CN111178456B - Abnormal index detection method and device, computer equipment and storage medium - Google Patents

Abnormal index detection method and device, computer equipment and storage medium
Download PDF

Info

Publication number
CN111178456B
Authority
CN
China
Prior art keywords
time series
anomaly detection
original time
time sequence
short term
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010041844.6A
Other languages
Chinese (zh)
Other versions
CN111178456A (en)
Inventor
张戎
董善东
胡婧茹
汪华
李剑锋
李雄政
聂鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010041844.6A
Publication of CN111178456A
Application granted
Publication of CN111178456B
Status: Active
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

The invention relates to an abnormal index detection method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring an original time series, where the original time series comprises a target data point and historical data points preceding it, the target data point comprises the index value reported at the time point to be measured, and the historical data points comprise the sequence of index values reported before that time point, arranged in the original time series in reporting order; inputting the original time series into an anomaly detection model, which processes it to obtain an anomaly detection result for the target data point; and determining from that result whether the index reported at the time point to be measured is abnormal, where the anomaly detection model is trained by deep learning. The technical scheme provided by the invention offers high recall, high accuracy, and a wide range of application scenarios.

Description

Abnormal index detection method and device, computer equipment and storage medium
Technical Field
The present invention relates to the technical field of artificial intelligence and machine learning, and in particular, to an abnormal index detection method, apparatus, computer device, and storage medium.
Background
Artificial Intelligence (AI) is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence: to perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of such machines, so that the machines can perceive, reason, and make decisions.
Artificial intelligence is a comprehensive discipline spanning a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-disciplinary field drawing on probability theory, statistics, approximation theory, convex analysis, and computational complexity theory. It studies how a computer can simulate or realize human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
An information technology service performance indicator is typically a set of numerical data. The traditional scheme of detecting abnormal indexes with manually set thresholds has high maintenance cost: the threshold must be set by hand from observation or experience, and any point exceeding it is treated as an anomaly. Dapeng Liu et al., "Opprentice: Towards Practical and Automatic Anomaly Detection Through Machine Learning", Proceedings of the 2015 Internet Measurement Conference, ACM, 2015, proposed a system for outlier detection based on feature engineering and a random forest method; on its final test set, accuracy exceeds 0.83 at a recall above 0.66. The prior art therefore suffers from high maintenance cost, requiring labor to maintain thresholds; narrow service coverage and low recall, unable to keep pace with service changes; and low accuracy, because index values take many different forms and a simple threshold can hardly cover all curves.
Disclosure of Invention
Embodiments of the present invention at least partially address the above-mentioned problems.
According to a first aspect of the present invention, an abnormal index detection method is provided. The method comprises the following steps: acquiring an original time series, where the original time series comprises a target data point and historical data points preceding it, the target data point comprises the index value reported at the time point to be measured, and the historical data points comprise the sequence of index values reported before that time point, arranged in the original time series in reporting order; inputting the original time series into an anomaly detection model, which processes it to obtain an anomaly detection result for the target data point; and determining from that result whether the index reported at the time point to be measured is abnormal, where the anomaly detection model is trained by deep learning.
In some embodiments, training the anomaly detection model by deep learning comprises: i. acquiring a sample time series and the mark corresponding to it; ii. inputting the sample time series and the mark into the anomaly detection model, which processes them to obtain an anomaly detection result for the target data point; iii. adjusting the anomaly detection model based on the anomaly detection result and the mark; iv. iterating steps i-iii M times, where M is a preset number of iterations.
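The iterative procedure above (steps i-iii repeated M times) follows the shape of a standard supervised training loop. The sketch below uses a simple logistic model and gradient step as stand-ins; the patent does not specify the model internals, loss, or optimizer, so all of those are illustrative assumptions.

```python
import numpy as np

def train_anomaly_detector(samples, labels, M=100, lr=0.1):
    """Sketch of steps i-iii iterated M times (step iv).

    samples: (n, d) array of sample time series (fixed length d)
    labels:  (n,) array of 0/1 marks (1 = anomalous)
    The logistic model here is a hypothetical stand-in for the
    deep anomaly detection model described in the patent.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=samples.shape[1])
    b = 0.0
    for _ in range(M):                           # step iv: iterate M times
        # step ii: forward pass -> anomaly detection result (probability)
        logits = samples @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        # step iii: adjust the model based on result vs. mark
        grad = probs - labels                    # logistic-loss gradient
        w -= lr * samples.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(samples, w, b):
    """Label a batch of series: 1 = anomalous, 0 = normal."""
    return (1.0 / (1.0 + np.exp(-(samples @ w + b))) > 0.5).astype(int)
```

With clearly separable normal/anomalous samples, the loop converges to a model that reproduces the marks.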
In some embodiments, the anomaly detection model includes a plurality of parallel processing channels including a first channel, a second channel, and a third channel. The first channel is configured to perform windowing on the original time series with different window sizes to generate a plurality of windowed time series having different data lengths, and to perform first fully-connected neural network processing on the plurality of windowed time series. The second channel is configured to perform downsampling on the original time series with different sampling intervals to generate a plurality of downsampled time series having different time resolutions, and to perform second fully-connected neural network processing on the plurality of downsampled time series. The third channel is configured to determine means over a plurality of segments of the original time series to generate a plurality of mean time series, and to perform third fully-connected neural network processing on the mean time series. The outputs of the plurality of parallel processing channels are spliced, and Softmax two-class classification is performed on the spliced output.
Softmax is a function for implementing multi-class classification. It maps output neurons to real numbers in (0, 1) and normalizes them so that the probabilities over the classes sum to 1. The Softmax function is defined as:

S_i = \frac{e^{V_i}}{\sum_{j=1}^{C} e^{V_j}}

where V_i is the output of the classifier's preceding output unit, i is the category index, and C is the total number of categories. S_i is the ratio of the exponential of the current element to the sum of the exponentials of all elements. The Softmax function thus converts multi-class output values into relative probabilities.
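As a minimal illustration, a numerically stable implementation of the Softmax function described above can be written as:

```python
import numpy as np

def softmax(v):
    """Map classifier outputs V_i to relative probabilities S_i.

    Subtracting the max before exponentiating is a standard
    numerical-stability trick; it does not change the result because
    Softmax is invariant to shifting all inputs by a constant.
    """
    v = np.asarray(v, dtype=float)
    e = np.exp(v - v.max())
    return e / e.sum()
```

In the two-class ("Softmax 2 classification") case used here, the output is a pair of probabilities summing to 1, one per class (normal/anomalous).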
In some embodiments, the plurality of windowed time series includes N windowed time series, N being an integer greater than or equal to 2, and the first fully-connected neural network processing includes: inputting the N windowed time series into their respective fully-connected neural networks to obtain N fully-connected neural network outputs; splicing the ith and the (i+1)th of the N fully-connected neural network outputs to obtain an ith spliced output, where i is an integer variable with initial value 1; and performing the following loop until i equals N:
inputting the ith spliced output into an intermediate fully-connected neural network to obtain an intermediate fully-connected neural network output;
incrementing i;
splicing the intermediate fully-connected neural network output with the (i+1)th of the N fully-connected neural network outputs to obtain an ith spliced output; and
providing the final intermediate fully-connected neural network output as the output of the first channel.
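The splice-then-transform loop of the first channel can be sketched as follows. The one-layer `mlp` with random weights is a hypothetical stand-in for the learned per-window and intermediate fully-connected networks; layer sizes and weights are illustrative assumptions, not the patent's actual parameters.

```python
import numpy as np

def first_channel(series, window_sizes, hidden=8):
    """Sketch of the first channel: window the original series with N
    window sizes, run each window through its own MLP, then iteratively
    splice with the next MLP output and pass through an intermediate MLP.
    """
    rng = np.random.default_rng(42)

    def mlp(x, out_dim):
        # placeholder for a learned FC layer: random weights + ReLU
        w = rng.normal(scale=0.1, size=(len(x), out_dim))
        return np.maximum(x @ w, 0.0)

    # windowing: take the last `size` points of the original series
    windows = [np.asarray(series[-size:], dtype=float) for size in window_sizes]
    outputs = [mlp(win, hidden) for win in windows]     # N per-window MLPs

    spliced = np.concatenate([outputs[0], outputs[1]])  # splice 1st and 2nd
    for i in range(1, len(outputs)):
        inter = mlp(spliced, hidden)                    # intermediate MLP
        if i + 1 < len(outputs):
            # splice intermediate output with the (i+1)-th MLP output
            spliced = np.concatenate([inter, outputs[i + 1]])
    return inter                                        # first-channel output
```

The loop consumes one additional per-window output per iteration, so the final intermediate output summarizes all N windows.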
In some embodiments, the anomaly detection model includes two or more long-short term memory units connected in series, the two or more long-short term memory units including a first long-short term memory unit through a qth long-short term memory unit, Q being an integer greater than or equal to 2, the two or more long-short term memory units configured to perform the steps of:
the first stage is as follows:
the first long short term memory unit is configured to perform windowing on the original time series with a first window size to generate a first windowed time series having a first data length, and perform long short term memory processing on the first windowed time series resulting in a first long short term memory output;
and a second stage:
the Pth long-short term memory unit is configured to perform windowing on the original time sequence by using a Pth window size to generate a Pth windowed time sequence with a Pth data length, splice the P-1 th long-short term memory output and the Pth windowed time sequence, and perform long-short term memory processing on the spliced sequence to obtain a Pth long-short term memory output, wherein the P is 2 at the beginning, and the P is an integer which is more than or equal to 2 and less than or equal to Q;
repeating the steps in the second stage until P equals to Q, and obtaining the Q-th long-short term memory output;
softmax 2 classification is performed on the qth long-short term memory output.
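The two-stage chaining above can be sketched in numpy. The minimal LSTM cell below (random weights, no biases) is a hypothetical stand-in for the trained long-short term memory units; only the control flow — window, splice the previous output in front, run the LSTM — mirrors the description.

```python
import numpy as np

def wlstm(series, window_sizes, hidden=4):
    """Sketch of chained WLSTM stages: stage 1 runs an LSTM over the
    first window; each later stage P splices the (P-1)th output in front
    of the Pth window and runs an LSTM over the spliced sequence.
    """
    rng = np.random.default_rng(7)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm(seq, hidden):
        # minimal single-layer LSTM over a 1-D sequence; returns final h
        W = rng.normal(scale=0.1, size=(4, hidden, 1 + hidden))
        h = np.zeros(hidden)
        c = np.zeros(hidden)
        for x in seq:
            z = np.concatenate(([x], h))
            i = sigmoid(W[0] @ z)        # input gate
            f = sigmoid(W[1] @ z)        # forget gate
            o = sigmoid(W[2] @ z)        # output gate
            g = np.tanh(W[3] @ z)        # candidate cell state
            c = f * c + i * g
            h = o * np.tanh(c)
        return h

    out = lstm(np.asarray(series[-window_sizes[0]:], dtype=float), hidden)
    for size in window_sizes[1:]:        # stages P = 2 .. Q
        window = np.asarray(series[-size:], dtype=float)
        spliced = np.concatenate([out, window])   # splice (P-1)th output
        out = lstm(spliced, hidden)
    return out                           # Qth long-short term memory output
```

A Softmax two-class layer (as defined earlier) would then be applied to the returned vector.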
In some embodiments, the method further comprises, prior to inputting the original time series into the anomaly detection model: performing primary anomaly identification on the original time series through a primary decision.
In some embodiments, the primary decision method comprises a statistical decision method, and the primary anomaly identification comprises: extracting the historical data points from the original time series; determining the mean and standard deviation of the historical data points; determining, from the mean and standard deviation, the value interval consistent with random error; and identifying the original time series as anomalous in response to the target data point falling outside that interval.
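The statistical decision above can be sketched with the 3-sigma rule as one concrete choice of the random-error interval (the patent does not fix the multiplier, so `k` is an illustrative parameter):

```python
import numpy as np

def primary_statistical_check(series, k=3.0):
    """Primary anomaly identification by statistical decision.

    The last element of `series` is the target data point; all earlier
    elements are the historical data points. Returns True if the target
    falls outside the interval [mean - k*std, mean + k*std] consistent
    with random error.
    """
    series = np.asarray(series, dtype=float)
    history, target = series[:-1], series[-1]
    mu = history.mean()
    sigma = history.std()
    low, high = mu - k * sigma, mu + k * sigma
    return not (low <= target <= high)   # True -> identified as anomalous
```

A target near the historical mean passes; one far outside the interval is flagged before the deep model is ever invoked.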
In some embodiments, the primary decision method comprises an unsupervised method, and the primary anomaly identification comprises: extracting each data point in the original time series; classifying the extracted data points with an unsupervised algorithm to obtain a classification result; and making an anomaly judgment on the time series based on the classification result.
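The patent does not name the unsupervised algorithm; as one concrete stand-in, the sketch below scores each data point by its distance to the series median and classifies the top fraction as the anomalous class — the `contamination` parameter and the median-distance score are illustrative assumptions.

```python
import numpy as np

def primary_unsupervised_check(series, contamination=0.05):
    """Primary anomaly identification by an unsupervised method (sketch).

    Each data point is scored by |value - median| and the top
    `contamination` fraction of scores is classified as anomalous.
    The series is judged anomalous if its target (last) point falls
    in the anomalous class.
    """
    series = np.asarray(series, dtype=float)
    scores = np.abs(series - np.median(series))
    cutoff = np.quantile(scores, 1.0 - contamination)
    labels = scores > cutoff          # per-point classification result
    return bool(labels[-1])           # anomaly judgment for the series
```

Any unsupervised classifier with a per-point anomalous/normal output (e.g. an isolation-forest-style method) could be substituted for the scoring step.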
In some embodiments, the method further comprises: sending an alarm message in response to the anomaly detection result indicating that the original time series is anomalous.
In some embodiments, the alarm message comprises: a short-message (SMS) alarm message, an application alarm message, or an applet alarm message.
According to a second aspect of the present invention, an abnormality index detection apparatus is provided. The device comprises an acquisition module and an abnormality detection module. The acquisition module is configured to acquire an original time sequence, the original time sequence including a target data point and a historical data point before the target data point, the target data point including an index value reported at a time point to be measured in the original time sequence, and the historical data point including an index value sequence reported at a time point before the time point to be measured, which is arranged in the original time sequence according to a reporting time sequence. The anomaly detection module is configured to input the original time sequence into an anomaly detection model, the anomaly detection model processes the original time sequence to obtain an anomaly detection result for the target data point, and determines whether an index reported by the time point to be detected is abnormal according to the anomaly detection result of the target data point, wherein the anomaly detection model is obtained by deep learning and training.
In some embodiments, the anomaly detection model includes a plurality of parallel processing channels including a first channel, a second channel, and a third channel. The first channel is configured to perform windowing on the original time series with different window sizes to generate a plurality of windowed time series having different data lengths, and to perform first fully-connected neural network processing on the plurality of windowed time series. The second channel is configured to perform downsampling on the original time series with different sampling intervals to generate a plurality of downsampled time series having different time resolutions, and to perform second fully-connected neural network processing on the plurality of downsampled time series. The third channel is configured to determine means over a plurality of segments of the original time series to generate a plurality of mean time series, and to perform third fully-connected neural network processing on the mean time series. The outputs of the plurality of parallel processing channels are spliced, and Softmax two-class classification is performed on the spliced output.
In some embodiments, the anomaly detection model comprises two or more long-short term memory units connected in series, the two or more long-short term memory units comprising a first long-short term memory unit through a Qth long-short term memory unit, Q being an integer greater than or equal to 2, the two or more long-short term memory units configured to perform the following steps:
the first stage is as follows:
the first long short term memory unit is configured to perform windowing on the original time series with a first window size to generate a first windowed time series having a first data length, and perform long short term memory processing on the first windowed time series resulting in a first long short term memory output;
and a second stage:
the Pth long-short term memory unit is configured to perform windowing on the original time sequence by using a Pth window size to generate a Pth windowed time sequence with a Pth data length, splice the P-1 th long-short term memory output and the Pth windowed time sequence, and perform long-short term memory processing on the spliced sequence to obtain a Pth long-short term memory output, wherein the P is 2 at the beginning, and the P is an integer which is more than or equal to 2 and less than or equal to Q;
repeating the steps in the second stage until P equals to Q, and obtaining the Q-th long-short term memory output;
softmax 2 classification is performed on the qth long-short term memory output.
According to some embodiments of the invention, there is provided a computer device comprising: a processor; and a memory having instructions stored thereon, the instructions, when executed on the processor, causing the processor to perform any of the above methods.
According to some embodiments of the invention, there is provided a computer readable storage medium having stored thereon instructions which, when executed on a processor, cause the processor to perform any of the above methods.
The invention performs time-series anomaly detection based on a deep learning model and has the following advantages. The scheme is end-to-end intelligent detection: no detection threshold needs to be set manually, and detection and judgment are made solely by the deep learning model. It achieves a high recall rate, which improves further as the data volume grows, and high accuracy, which likewise improves with more data. In addition, it applies to a wide range of scenarios: the existing model is easily extended to other scenarios by matching the data format, completing the data labeling work, adding different types of positive and negative samples on top of the existing model, and adapting the model through iterative incremental training.
Drawings
Embodiments of the invention will now be described in more detail, by way of non-limiting examples only, with reference to the accompanying drawings, in which like reference numerals refer to like parts throughout, and in which:
FIG. 1 shows a schematic diagram of an application scenario of a data anomaly detection method according to an embodiment;
FIGS. 2a-2c respectively illustrate schematic diagrams of a user interface for alerting of a time series of anomalies, in accordance with one embodiment of the present invention;
FIG. 3 shows a flow diagram of an anomaly indicator detection method according to one embodiment;
FIG. 4 shows a schematic diagram of an original time series according to an embodiment;
FIG. 5 shows a flow diagram of a method of sample data annotation;
FIG. 6 shows a schematic diagram of a time series annotation platform;
FIG. 7 is a flow diagram illustrating a method for off-line training and on-line testing of a deep learning model according to an embodiment of the invention;
FIG. 8 shows a schematic diagram of the network structure of the hierarchically spliced fully-connected neural network HSDNN;
FIG. 9 shows a schematic diagram of the network structure of the windowed long-short term memory network WLSTM;
FIG. 10 is a schematic diagram of an apparatus for off-line training and on-line testing of deep learning models, according to an embodiment of the invention; and
FIG. 11 shows a schematic diagram of an example computer device for anomaly indicator detection.
Detailed Description
Time-series anomaly detection is of great significance for information technology. The performance indexes of an information technology service generally characterize the performance of the IT system and may include user access volume, query request volume, query success volume, CPU utilization, storage utilization, and network resource utilization. Such an index is usually a set of time-series data, and monitoring the time series yields important parameters such as user access volume and server status. When an IT service is abnormal or fails, time-series anomaly detection can quickly identify the problematic service indexes, so that IT resource scheduling, fault repair, and similar work can be carried out better, providing users with a stable experience.
The following description provides specific details for a thorough understanding and enabling description of various embodiments of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these details. In some instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the disclosure. The terminology used in the present disclosure is to be understood in its broadest reasonable manner, even though it is being used in conjunction with a particular embodiment of the present disclosure.
First, some terms used in the embodiments of the present disclosure are explained for those skilled in the art:
1. deep learning: is a branch of machine learning, algorithms that perform high-level abstractions on data using multiple processing layers that contain complex structures or consist of multiple nonlinear transformations. Deep learning unifies feature extraction work and classification/fitting into a framework, learns and extracts features through a data set, and is a method capable of automatically learning and extracting features.
2. Positive sample: samples corresponding to categories intended to be correctly classified. Here, the positive sample refers to a time-series sample in which no abnormality exists.
3. Negative sample: samples corresponding to categories other than the correctly classified category. Here, the negative sample refers to a time-series sample in which an abnormality exists.
4. Precision: the proportion of correctly retrieved items (TP) among all retrieved items (TP + FP).
5. Recall: the proportion of correctly retrieved items (TP) among all items that should have been retrieved (TP + FN).
6. F1 score: the harmonic mean of precision and recall.
7. MLP (Multi-Layer Perceptron): a fully-connected neural network comprising an input layer, hidden layers, and an output layer. The input features are connected to the neurons of the hidden layer, which in turn are connected to the neurons of the output layer. In an MLP the layers are fully connected: every neuron in one layer is connected to all neurons in the next layer.
8. LSTM (Long short term memory): a long-short term memory network is a time-recursive neural network suitable for processing and predicting events with relatively long intervals and delays in a time series.
9. HSDNN (Hierarchical Spliced Deep Neural Network): the hierarchically spliced deep neural network proposed in the present application. An HSDNN consists of multiple fully-connected networks (MLPs). Unlike a conventional single fully-connected network, the HSDNN is divided into several MLPs whose hidden layers are separate and do not communicate with each other. In a splicing layer, two or more independent MLPs are spliced into one larger MLP.
10. WLSTM (Windowed LSTM): an LSTM network structure with a variable window.
11. Time series: a sequence of data points arranged in chronological order. Typically the time interval of a time series is constant (e.g., 1 minute, 5 minutes). Here, the time series is a monitoring time series: for example, one monitoring data point is reported every minute, and the per-minute points joined together form a time series. Each data point is stored in the system as a (time, value) pair, where the value may be an index value and the time the moment at which the index value was acquired; pulling the data points for a time range yields a set of values. The original time series includes a target data point and the historical data points preceding it. The target data point comprises the index value reported at the time point to be measured; the historical data points comprise the sequence of index values reported before that time point, arranged in reporting order. The index characterizes performance parameters of the IT system and may include user access volume, query request volume, query success volume, CPU utilization, storage utilization, and network resource utilization.
12. Random Forest: a classifier comprising multiple decision trees, whose output class is the mode of the classes output by the individual trees. A decision tree is a tree-structured predictive model.
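Terms 4-6 above (precision, recall, F1) can be made concrete in a few lines:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall.
    tp, fp, fn: counts of true positives, false positives,
    and false negatives from an evaluation run.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, 8 true positives with 2 false positives and 2 false negatives give precision, recall, and F1 all equal to 0.8.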
Fig. 1 is a schematic diagram of an application scenario 100 of a data anomaly detection method according to an embodiment. Referring to fig. 1, the application scenario includes a data reporting device 110 and an anomaly detection device 120 connected via a network. The data reporting device 110 reports data points, and the anomaly detection device 120 performs anomaly detection processing on the reported data points. Both the data reporting device 110 and the anomaly detection device 120 may be terminals or servers. The terminal may be a smart television, a desktop computer, or a mobile terminal; in particular, the mobile terminal may be at least one of a smartphone, a tablet, a laptop, a personal digital assistant, a wearable device, and the like. The server may be implemented as a stand-alone server or as a cluster of multiple physical servers. There may be one or more data reporting devices 110; for example, multiple terminals may report their respective data to the anomaly detection device 120. The data reporting device 110 may report data points to the anomaly detection device 120 periodically at certain time intervals. The anomaly detection device 120 can acquire a time series that includes a target data point and the historical data points reported before the target point, arranged in reporting order.
FIGS. 2a-2c illustrate user interfaces for alerting on a time-series anomaly, according to one embodiment of the invention. In the deep-learning-based time-series anomaly detection scheme provided by the invention, the deep learning model automatically performs intelligent anomaly detection on the input data and outputs a label indicating whether the time series is anomalous. The scheme can be applied to anomaly detection and intelligent alarm functions for time series in various services (such as cloud services, instant messaging software, personal-space web pages, and the like). When an anomaly occurs in the time series, it is detected quickly and the responsible person is notified through the alarm function in the form of the interfaces shown in figs. 2a-2c. Fig. 2a shows an alarm sent as a short message to a user terminal (e.g., a smartphone or tablet) in response to detecting a timing anomaly; the alarm message may include the alarm object, the item it belongs to, the region, the account ID, the alarm policy, the trigger time, and the like. Fig. 2b shows an alert issued as an instant message (e.g., a QQ message) in communication software on the user terminal; the alarm message may include the time point of the anomaly, the timing curve, a link to the anomaly report, and the like. Fig. 2c shows an alert issued as a message in an applet (e.g., a WeChat applet); the alarm message may include timing curves and statistical charts of different service data over different periods (e.g., the past week, the past day).
As will be appreciated by those skilled in the art, the content of the alert message may include alerts for other items, and may also include other alert forms.
FIG. 3 illustrates a flow diagram of an anomaly index detection method 300 according to one embodiment. In step 301, an anomaly detection device (which may be a terminal or a server) acquires an original time series, which may include a target data point and historical data points preceding it. The target data point comprises one or more index values reported at the time point to be measured. The historical data points comprise one or more index values (i.e., an index value sequence) reported before the time point to be measured, arranged in the original time series in reporting order. To explain the selection of time-series data points more clearly, the following description is made in conjunction with fig. 4.
A diagram 400 of raw time series extraction according to one embodiment is shown in fig. 4. The abscissa in fig. 4 represents the acquisition time of the time series data, and the ordinate represents the extraction period. Here, three parts of time series data are selected to constitute the original time series, taking as an example a window size of 180 minutes with a sampling interval of 1 minute (i.e., a time granularity of 1 minute). The first part of the time series data includes the current time point 403 to be measured today and the data points within the window before the current time point 403; a total of 181 data points (including the value at the current time point 403) are acquired for today. The second part includes the data points within the window before and after the time point 402 one day before, corresponding to the current time point 403 to be measured on a comparable basis; a total of 361 data points are collected for the one-day-before window. The third part includes the data points within the window before and after the time point 401 one week before, corresponding to the current time point 403; a total of 361 data points are collected for the one-week-before window. Thus, with a window size of 180 minutes and a sampling interval of 1 minute, the original time series consists of 903 points in total: 181 data points from today, 361 data points from one day before, and 361 data points from one week before.
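The three-part extraction above can be sketched in a few lines. This is an illustrative sketch rather than code from the patent; it assumes the metric is stored as one value per minute, oldest first, with `t` the index of the point to be measured:

```python
# Illustrative sketch (not from the patent): one metric value per minute, oldest first.
DAY, WEEK = 1440, 10080  # minutes in a day / in a week

def extract_original_series(values, t, window=180):
    """Build the 903-point input for target index t (window = 180 minutes)."""
    today = values[t - window : t + 1]                            # 181 points, incl. target
    day_ago = values[t - DAY - window : t - DAY + window + 1]     # 361 points
    week_ago = values[t - WEEK - window : t - WEEK + window + 1]  # 361 points
    return today + day_ago + week_ago                             # 903 points total
```

With a 180-minute window this yields 181 + 361 + 361 = 903 points, matching the count in fig. 4.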
In step 302, the original time series is input into an anomaly detection model for processing, so as to obtain an anomaly detection result for the target data point. Then, whether the index reported at the time point to be detected is abnormal is determined according to the anomaly detection result of the target data point. Specifically, in response to the anomaly detection result of the target data point indicating an anomaly, the index reported at the time point to be detected is determined to be abnormal; in response to the anomaly detection result indicating normal, the index is determined to be normal. For example, when the data point corresponding to the user access volume of a WeChat applet on a certain day is detected as abnormal, the access volume of the WeChat applet on that day is determined to be abnormal. Here, the anomaly detection model is trained by deep learning. The anomaly detection model herein includes at least the HSDNN or WLSTM models, which will be described in detail later.
Before training an anomaly detection model (i.e., machine learning models such as HSDNN and WLSTM), a labeled sample data set needs to be prepared first. A flow diagram of a method 500 of sample data annotation is shown in fig. 5. The data in this scheme is derived from massive time series operation and maintenance data 501 from various services (for example, cloud-based services (such as Tencent Cloud), interactive application data (such as the QQ application), and personal space web pages (such as QQ space, Qzone)).
First, at 502, positive sample filtering is performed through statistical, unsupervised, and similar methods to preliminarily screen out suspicious sample data; the positive samples removed at this screening are stored in the positive sample data set 505. In one embodiment, the preliminary decision method comprises a statistical decision method. The statistical decision method may include: extracting historical data points from the original time series; determining the mean and standard deviation of the historical data points; determining a numerical interval consistent with random error from the mean and standard deviation; and, in response to the target data point falling outside this interval, identifying the original time series as anomalous. Specifically, the computer device may extract the historical data points other than the target data point from the time series, and determine their mean and standard deviation by the statistical decision method, that is, the mean and standard deviation of the extracted historical data points. In one embodiment, the statistical decision algorithm comprises the three-sigma rule (three-sigma rule of thumb). The three-sigma rule, also known as the Pauta criterion, first assumes that a set of measurements contains only random error, computes the standard deviation, and then determines an interval according to a given probability; a value falling outside this interval is considered a gross error rather than random error. Specifically, the three-sigma rule states that a value falls in (μ − σ, μ + σ) with probability 0.6827, in (μ − 2σ, μ + 2σ) with probability 0.9545, and in (μ − 3σ, μ + 3σ) with probability 0.9973, where σ represents the standard deviation and μ represents the mean. It should be understood that the mean here is the mean of the historical data points in the time series, and the standard deviation is the standard deviation of those historical data points.
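The three-sigma check described above can be sketched as follows (illustrative only; `history` is the sequence of historical data points and `target` the value at the time point to be measured; the use of the population standard deviation is an assumption):

```python
import statistics

def three_sigma_anomalous(history, target):
    """Flag target as anomalous if it lies outside (mu - 3*sigma, mu + 3*sigma)."""
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)  # population std-dev; an assumption here
    return abs(target - mu) > 3 * sigma
```

A point consistent with the historical fluctuation is treated as random error; a point outside the 3σ band is treated as a gross error, i.e., a suspected anomaly.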
In another embodiment, the preliminary decision method comprises an unsupervised method. The preliminary anomaly identification of the original time series through the preliminary decision comprises: extracting each data point in the original time series; classifying the extracted data points through an unsupervised algorithm to obtain a classification result; and performing anomaly decision on the time series based on the classification result.
Specifically, the computer device can substitute unlabeled training samples into the formula of a preset unsupervised algorithm for unsupervised machine learning training, adjusting the parameters of the formula during training to optimize the algorithm. The computer device may extract each data point in the time series; it should be understood that the extracted data points include the target data point and the historical data points. The computer device can substitute the extracted data points into the parameter-adjusted formula of the unsupervised algorithm for calculation, so that each data point is classified to obtain a classification result. The computer device can then perform anomaly decision processing on the time series according to the classification result.
The unsupervised algorithm includes at least one of a Recurrent Neural Network (RNN), the Isolation Forest algorithm, a one-class Support Vector Machine (OneClassSVM), an Exponentially Weighted Moving Average (EWMA) algorithm, and the like.
Among them, a Recurrent Neural Network (RNN) is a type of Neural Network algorithm for processing sequence data. The essential feature is that there are both internal feedback and connections between the processing units.
An isolated Forest (Isolation Forest) is a fast anomaly detection method based on Ensemble learning (Ensemble), has linear time complexity and high accuracy, and is an algorithm meeting the requirement of big data processing.
A one-class Support Vector Machine (OneClassSVM) is a classifier obtained by unsupervised training on samples of only one class; the trained classifier judges any sample not belonging to that class as an outlier, rather than assigning it to another class.
The Exponentially Weighted Moving Average (EWMA) algorithm is a special weighted moving average method in which the weights decrease exponentially with the age of the data.
It will be appreciated that different unsupervised algorithms will yield different classification results.
In one embodiment, when the unsupervised algorithm is a recurrent neural network algorithm, a classification result indicating whether the target data point is abnormal can be output directly. It can be understood that the time series can then be subjected to anomaly decision processing according to this classification result for the target data point, to obtain a decision result indicating whether the time series is suspected to be abnormal.
In one embodiment, when the unsupervised algorithm is an isolated forest, the classification result includes an average path length of a leaf node where the target data point is located in a tree of the isolated forest. Then, when the average path length is less than or equal to the preset threshold, it may be determined that the time series is suspected to be abnormal. Otherwise, when the average path length is greater than the preset threshold, it can be determined that the time sequence is normal.
In one embodiment, when the unsupervised algorithm is a support vector machine algorithm, the classification result indicates whether the target data point belongs to a normal category, when the target data point does not belong to the normal category, it can be determined that the time series is suspected to be abnormal, and when the target data point belongs to the normal category, it can be determined that the time series is normal.
In one embodiment, when the unsupervised algorithm is an exponential weighted moving average algorithm, the computer device may smooth the time series through the exponential weighted moving average algorithm, and determine whether the target data point is within a random error range by using a statistical analysis algorithm with respect to the smoothed time series, and if so, determine that the time series is normal, and if not, determine that the time series is suspected to be abnormal.
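As an illustration of the smoothing step (a minimal sketch, not the patent's implementation), the exponentially weighted moving average can be computed recursively; the smoothed series can then be checked with the statistical analysis described above. The smoothing factor `alpha` is a hypothetical parameter:

```python
def ewma(series, alpha=0.5):
    """Exponentially weighted moving average: s[t] = alpha*x[t] + (1-alpha)*s[t-1]."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed
```

A larger `alpha` weights recent points more heavily; a smaller `alpha` yields a smoother baseline against which the target point can be compared.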
In this embodiment, the time series is subjected to anomaly decision processing by the unsupervised algorithm, and the unsupervised algorithm is combined with the anomaly detection model obtained through supervised learning, thereby realizing multi-level anomaly detection processing and improving the accuracy of anomaly detection.
In one embodiment, a plurality of unsupervised algorithms are used. The method further comprises: obtaining the anomaly decision result corresponding to each unsupervised algorithm; performing joint detection processing according to the anomaly decision results corresponding to the unsupervised algorithms; and when the result of the joint detection processing shows that the time series is abnormal, judging that the time series is suspected to be abnormal.
In one embodiment, the performing the joint detection processing according to the anomaly decision result corresponding to each unsupervised algorithm includes: and when the abnormity judgment result corresponding to any unsupervised algorithm represents that the time sequence is abnormal, judging that the time sequence is suspected to be abnormal. It can be understood that, since each unsupervised algorithm has its own disadvantages, and the abnormality decision results obtained by each unsupervised algorithm may have the situations of imperfection and undetected abnormality, the abnormality decision results corresponding to each unsupervised algorithm are jointly decided, and when the abnormality decision result corresponding to any unsupervised algorithm indicates that the time series is abnormal, it is determined that the time series is suspected to be abnormal. Namely, the anomaly judgment results of all unsupervised algorithms are comprehensively considered, so that the primary anomaly identification of the time series is more accurate.
In one embodiment, the performing the joint detection processing according to the anomaly decision result corresponding to each unsupervised algorithm includes: and determining preset weights corresponding to the unsupervised algorithms, and determining a joint detection processing result according to the abnormal judgment result corresponding to each unsupervised algorithm and the corresponding preset weights.
The abnormal judgment result corresponding to each unsupervised algorithm comprises any one of the abnormal time sequence or the normal time sequence. The computer can determine a first proportion of the abnormal judgment results of the abnormal time sequence and a second proportion of the abnormal judgment results of the normal time sequence according to the weight of each unsupervised algorithm and the corresponding abnormal judgment results, compares the first proportion and the second proportion, and takes the abnormal judgment results corresponding to larger values as the results of the joint detection processing.
It can be understood that when the first proportion, for the anomaly decision results indicating that the time series is abnormal, is greater than the second proportion, for the anomaly decision results indicating that the time series is normal, the time series abnormality is taken as the result of the joint detection processing. Conversely, when the first proportion is smaller than the second proportion, the time series being normal is taken as the result of the joint detection processing.
For ease of understanding, an example is given. Suppose there are 3 unsupervised algorithms A, B, and C with preset weights of 0.4, 0.4, and 0.2, respectively. The anomaly decision result of algorithm A is that the time series is abnormal, that of algorithm B is that the time series is abnormal, and that of algorithm C is that the time series is normal. The first proportion, for the decision that the time series is abnormal, is then 0.8, and the second proportion, for the decision that the time series is normal, is 0.2. The computer device may therefore take the time series abnormality as the result of the joint detection processing.
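The weighted joint decision in this example can be sketched as follows (illustrative; `True` marks an algorithm that judged the series anomalous):

```python
def joint_decision(decisions, weights):
    """Return True (series suspected anomalous) when the weighted share of
    'anomalous' votes exceeds the weighted share of 'normal' votes."""
    anomalous = sum(w for d, w in zip(decisions, weights) if d)
    normal = sum(w for d, w in zip(decisions, weights) if not d)
    return anomalous > normal

# The worked example above: A and B (weights 0.4, 0.4) vote anomalous, C (0.2) normal,
# so the weighted shares are 0.8 vs 0.2 and the joint result is "anomalous".
joint_decision([True, True, False], [0.4, 0.4, 0.2])
```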
It can be understood that the result of the joint detection processing is determined according to the abnormality judgment result corresponding to each unsupervised algorithm and the corresponding preset weight, and the abnormality judgment result of each unsupervised algorithm is comprehensively and reasonably considered, so that the primary abnormality identification of the time series can be more accurate.
The computer device may determine whether the time series is suspected to be abnormal according to a result of the joint detection processing. And when the result of the joint detection processing indicates that the time sequence is abnormal, the computer equipment judges that the time sequence is suspected to be abnormal. Further, when the result of the joint detection processing indicates that the time series is normal, the computer device may determine that the time series is normal.
It is noted that the computer device may combine a statistical decision algorithm with at least one unsupervised algorithm for primary anomaly identification for the time series.
In one embodiment, the computer device may perform anomaly identification on the time series through a statistical decision algorithm at a first level, perform joint detection processing on the time series through a plurality of unsupervised algorithms at a second level after the suspected anomaly of the time series is identified, perform feature extraction on the time series at a third level after the suspected anomaly of the time series is determined through the joint detection, input the extracted feature data into an anomaly detection model obtained through supervised machine learning training for further detection, and invoke an anomaly processing strategy when the anomaly detection model outputs an anomaly detection result that a target data point is abnormal.
After the positive samples are filtered out and stored in the positive sample data set 505 in step 502, the suspicious samples are input into an annotation platform (for example, the annotation platform described below). In step 503, the samples are manually annotated by operation and maintenance personnel through the annotation platform; the negative samples obtained after manual annotation are stored in the negative sample data set 504, and the positive samples obtained after manual annotation are stored in the positive sample data set 505.
FIG. 6 shows a schematic diagram of a time series annotation platform. The manual labeling of data samples is completed with a labeling tool as shown in fig. 6. For each sample that needs to be labeled, the system provides not only the current timing curve but also the timing curves of one day before and one week before as references. The operation and maintenance personnel can label the sample on the labeling platform through comparison and judgment, clicking the corresponding operation button to label it as a positive or negative sample. Because the labeling platform presents the current, one-day-before, and one-week-before data for comparison and reference, annotators can concentrate on the labeling of abnormal samples, which effectively improves the efficiency of time series labeling and yields a large amount of labeled data for model training and testing.
FIG. 7 shows a flow diagram of a method 700 for offline training and online testing of a deep learning model, according to an embodiment of the invention. In offline model training, the labeled sample data set (including the positive sample set and the negative sample set) 701 described above is first subjected to simple preprocessing 702. In one embodiment, the total sample size is 232818 sets of sample data. The data set is divided into a training data set and a test data set: the training data set includes 151606 sets of sample data, while the test data set includes 81212 sets of sample data. The training data set includes 102675 sets of negative samples and 48931 sets of positive samples; the test data set includes 7330 sets of negative samples and 73882 sets of positive samples. The preprocessing mainly comprises max-min normalization of the sample data, i.e., rescaling the time series into the range [0, 1] as input for the deep learning model. Two schemes, HSDNN and WLSTM, are proposed herein for the deep learning model. In step 703, the preprocessed data is input into the deep learning model, which learns from the time series data, and in step 704, a label indicating whether the time series data is abnormal is output. Thereafter, the parameters of the deep learning model are adjusted based on this label.
When the deep learning model is used for online detection: in step 706, the time series data to be detected is first acquired in real time. In step 707, positive sample filtering is performed using the statistical and unsupervised methods described above; a specific implementation of these methods is described above in relation to fig. 5. In step 708, the screened suspicious samples are preprocessed. The preprocessing mainly comprises max-min normalization of the sample data, i.e., rescaling the time series into the range [0, 1]. In step 709, the deep learning model trained offline is loaded to perform anomaly detection on the preprocessed time series data. In step 710, the abnormal samples found through anomaly detection by the deep learning model are output. Thereafter, the data set used for offline training of the deep learning model is updated with the output abnormal samples.
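The max-min normalization used in both the offline and online preprocessing steps can be sketched as follows (illustrative; the fallback for a constant series is an assumption not specified in the text):

```python
def min_max_normalize(series):
    """Rescale a time series into the range [0, 1]."""
    lo, hi = min(series), max(series)
    if hi == lo:                      # constant series: no spread to normalize
        return [0.0] * len(series)
    return [(v - lo) / (hi - lo) for v in series]
```

Normalizing every input series to a common range lets the deep learning model treat metrics of very different magnitudes (e.g., request counts vs. memory usage) uniformly.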
In one embodiment, training the anomaly detection model by deep learning comprises: i. acquiring a sample time series and a label corresponding to the sample time series; ii. inputting the sample time series and the label into the anomaly detection model, which processes them to obtain an anomaly detection result for the target data point; iii. adjusting the anomaly detection model based on the anomaly detection result and the label; and iv. iterating steps i-iii M times, where M is a preset number of iterations.
In one embodiment, the anomaly detection model may be a stacked-spliced fully-connected neural network (HSDNN). Fig. 8 shows a schematic diagram of a network structure 800 of the stacked-spliced fully-connected neural network HSDNN. HSDNN is a network structure formed by splicing a plurality of fully-connected neural network structures; each crossed icon in fig. 8 represents one fully-connected neural network structure. HSDNN differs from a conventional fully-connected neural network in that it comprises many block-wise fully-connected networks (MLPs): the hidden layers of the individual MLP blocks in fig. 8 are distinct and do not communicate with each other. In the splice layers of fig. 8 (see the shaded parts), two or more independent fully-connected neural networks are spliced into a larger fully-connected neural network, forming a locally fully-connected network structure. Because this involves a number of block-wise fully-connected neural networks, it is referred to herein as the stacked-spliced fully-connected neural network HSDNN. The stacked-spliced fully-connected neural network HSDNN comprises a plurality of parallel processing channels. Compared with the traditional fully-connected neural network MLP, its advantage is that, through the splicing of a plurality of parallel processing channels (window transform, down-sampling, and segment aggregation), the model can better capture both the global and local characteristics of a time series, and achieves a higher recall rate, accuracy, and F1 score.
In network 800, HSDNN includes 3 data input modules: a window transform module 802, a down-sampling module 803, and a segment aggregation module 804. The window transform module 802, the down-sampling module 803, and the segment aggregation module 804 respectively form three parallel processing channels: a first channel 806, a second channel 807, and a third channel 808. The first channel 806 is configured to perform windowing on the original time series with different window sizes to generate a plurality of windowed time series having different data lengths, and to perform first fully-connected neural network processing on the plurality of windowed time series. Here, the window sizes are 10 minutes in block 8021, 30 minutes in block 8022, 60 minutes in block 8023, and 180 minutes in block 8024 (the same as the original time series). The second channel 807 is configured to down-sample the original time series with different sampling intervals to generate a plurality of down-sampled time series having different time resolutions, and to perform second fully-connected neural network processing on the plurality of down-sampled time series. The sampling intervals are 18 minutes in block 8031, 6 minutes in block 8032, 3 minutes in block 8033, and 1 minute in block 8034 (the same as the original time series). The third channel 808 is configured to determine the mean of each of a plurality of segments of the original time series to generate a mean time series, and to perform third fully-connected neural network processing on the mean time series. Here, by averaging the points in every 30 minutes of the original time series, the 903 points of the original time series become 31 points through segment aggregation (7 points for today, including the current time point; 12 points for one day before; and 12 points for one week before).
The outputs of the multiple parallel processing channels are spliced 805, and Softmax 2-class classification is performed on the spliced output.
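The three input transforms feeding the channels can be sketched as follows (illustrative only; exact alignment details, e.g. whether each transform is anchored at the most recent point, are assumptions):

```python
def window_transform(series, sizes=(10, 30, 60, 180)):
    """Channel 1 inputs: keep the last w points for each window size w."""
    return [series[-w:] for w in sizes]

def down_sample(series, steps=(18, 6, 3, 1)):
    """Channel 2 inputs: keep every k-th point for each sampling interval k."""
    return [series[::k] for k in steps]

def segment_aggregate(series, seg=30):
    """Channel 3 input: mean of each 30-point segment."""
    return [sum(series[i:i + seg]) / len(series[i:i + seg])
            for i in range(0, len(series), seg)]
```

Applied to the 181-point "today" part, `segment_aggregate` yields 7 values (six full 30-minute segments plus the current point), consistent with the 31-point count described above.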
In one embodiment, the following steps are performed for the first channel and the second channel. Taking the first channel as an example, the plurality of windowed time-series includes N windowed time-series, N being an integer greater than or equal to 2, and wherein the first fully-connected neural network processing includes:
1) Inputting the N windowed time sequences into respective corresponding fully-connected neural network to obtain corresponding N fully-connected neural network outputs;
2) Splicing the ith fully-connected neural network output and the (i + 1) th fully-connected neural network output in the N fully-connected neural network outputs to obtain an ith spliced output, wherein i is an integer variable, and the initial value of i is 1;
the following loop is performed until i equals N:
inputting the ith spliced output into an intermediate fully-connected neural network to obtain an intermediate fully-connected neural network output;
increasing the value of i progressively;
splicing the intermediate fully-connected neural network output with the (i + 1)th fully-connected neural network output of the N fully-connected neural network outputs to obtain an ith spliced output; and
3) Providing the intermediate fully-connected neural network output as the output of the first channel.
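The iterative splicing above can be sketched with placeholder networks (illustrative; `toy_mlp` merely stands in for a trained fully-connected block, and list concatenation models the splice layer):

```python
def toy_mlp(vec):
    """Placeholder for a fully-connected block: collapse a vector to one feature."""
    return [sum(vec) / len(vec)]

def channel_forward(windowed_inputs):
    outs = [toy_mlp(x) for x in windowed_inputs]   # step 1: one MLP per windowed series
    spliced = outs[0] + outs[1]                    # step 2: first splice (concatenation)
    i = 1
    while i < len(outs) - 1:
        mid = toy_mlp(spliced)                     # intermediate fully-connected network
        i += 1
        spliced = mid + outs[i]                    # splice with the next MLP output
    return toy_mlp(spliced)                        # step 3: output of the channel
```

With four windowed inputs, the loop consumes the third and fourth MLP outputs one at a time, exactly mirroring the i = 1 .. N iteration described in the text.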
As will be appreciated by those skilled in the art, the number of channels (e.g., the upsampling channel may be increased), the number of layers spliced in each channel, the window size, the sampling interval, the length of the segments in the segment aggregation, etc. may be adjusted as appropriate.
In one embodiment, the anomaly detection model may be a windowed long short term memory (WLSTM) network. Fig. 9 shows a schematic diagram of the structure of a windowed long short term memory network (WLSTM) 900. An LSTM (long short term memory) network is a recurrent neural network with a chain of repeating neural network modules, suitable for processing and predicting events with relatively long intervals and delays in a time series. Compared with the single-layer repeating module (a tanh layer) of an ordinary recurrent neural network (RNN), each repeating module (cell) in the chain structure of an LSTM has four interacting neural network layers. The core concepts of LSTM are the cell state and the "gate" structures. The cell state corresponds to a path of information transmission, allowing information to be passed along the sequence; it can carry relevant information throughout the sequence processing, so that even information from earlier time steps can reach cells at later time steps, overcoming the effects of short-term memory. The addition and removal of information is accomplished through "gate" structures, which learn which information to keep or forget during training.
TheWLSTM network 900 includes a plurality of long short term memory LSTM units in series. In one embodiment, theWLSTM network 900 includes two or more long-short term memory cells connected in series, the two or more long-short term memory cells including a first long-short term memory cell through a Qth long-short term memory cell, Q being an integer greater than or equal to 2, the two or more long-short term memory cells configured to perform the following steps.
In the first stage, the first long short term memory unit is configured to perform windowing on the original time series using a first window size to generate a first windowed time series having a first data length, and to perform long short term memory processing on the first windowed time series to obtain a first long short term memory output. In the second stage, the Pth long short term memory unit is configured to perform windowing on the original time series using a Pth window size to generate a Pth windowed time series having a Pth data length, to splice the (P-1)th long short term memory output with the Pth windowed time series, and to perform long short term memory processing on the spliced series to obtain a Pth long short term memory output, where P is initially 2, and P is an integer greater than or equal to 2 and less than or equal to Q. The steps of the second stage are repeated until P equals Q, yielding the Qth long short term memory output; Softmax 2-class classification is then performed on the Qth long short term memory output.
In another embodiment, the plurality of long short term memory LSTM units includes 4 LSTM units: a first LSTM unit, a second LSTM unit, a third LSTM unit, and a fourth LSTM unit. The first LSTM unit is configured to perform windowing on the original time series with a first window size 901 (here, 7 minutes) to generate a first windowed time series having a first data length, and to perform LSTM processing on it, yielding a first LSTM output. The second LSTM unit is configured to perform windowing on the original time series with a second window size 902 (here, 20 minutes) to generate a second windowed time series having a second data length, to splice the first LSTM output with the second windowed time series, and to perform LSTM processing on the spliced series to obtain a second LSTM output. The third LSTM unit is configured to perform windowing on the original time series with a third window size 903 (here, 60 minutes) to generate a third windowed time series having a third data length, to splice the second LSTM output with the third windowed time series, and to perform LSTM processing on the spliced sequence to obtain a third LSTM output. The fourth LSTM unit is configured to perform windowing on the original time series with a fourth window size 904 (here, 180 minutes, the same as the original time series) to generate a fourth windowed time series having a fourth data length, to splice the third LSTM output with the fourth windowed time series, and to perform LSTM processing on the spliced sequence to obtain a fourth LSTM output. Softmax 2-class classification is performed on the fourth LSTM output, outputting a probability of the series being abnormal or normal. Through this adjustment of the time window size, local features and global features of the original time series are more easily captured.
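The chained windowing-and-splicing structure can be sketched with a placeholder in place of a real LSTM layer (illustrative only; `toy_lstm` is a toy summary function standing in for an LSTM unit's final hidden state):

```python
def toy_lstm(seq):
    """Placeholder for an LSTM unit: summarize a sequence into one state value."""
    return [sum(seq) / len(seq)]

def wlstm_forward(series, window_sizes=(7, 20, 60, 180)):
    state = []                             # no previous output before the first unit
    for w in window_sizes:
        windowed = series[-w:]             # windowing with size w
        state = toy_lstm(state + windowed) # splice previous output, then "LSTM"
    return state                           # fed to Softmax 2-class classification
```

Each unit sees a wider window than the last plus the previous unit's output, which is how the model mixes local (short-window) and global (long-window) features.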
FIG. 10 is a schematic diagram of an apparatus 1000 for offline training and online testing of a deep learning model according to an embodiment of the present invention. The apparatus 1000 includes at least an acquisition module 1001 and an anomaly detection module 1002. The acquisition module 1001 is configured to acquire an original time series, where the original time series includes a target data point and historical data points before the target data point; the target data point comprises an index value reported at the time point to be measured in the original time series, and the historical data points comprise an index value sequence, reported at time points before the time point to be measured, arranged in the original time series in reporting order. The anomaly detection module 1002 is configured to input the original time series into an anomaly detection model, which processes the original time series to obtain an anomaly detection result for the target data point, and to determine whether the index reported at the time point to be detected is abnormal according to the anomaly detection result of the target data point, where the anomaly detection model is obtained by deep learning training.
For the two deep learning models provided by the invention, based on the 151606 sets of training data and 81212 sets of test data, usable WLSTM and HSDNN deep models can be obtained through parameter tuning. As shown in the table below, the WLSTM model achieves a recall rate of 89.78%, an accuracy rate of 94.93%, and an F1 score of 92.28%. The HSDNN model achieves a recall rate of 91.04%, an accuracy rate of 95.18%, and an F1 score of 93.06%.
Compared with the Opprentice scheme of Dapeng Liu et al., "Opprentice: Towards Practical and Automatic Anomaly Detection Through Machine Learning", ACM IMC, 2015 (an accuracy rate of 83% at a recall rate above 66%), the recall rate and accuracy rate of the deep-learning-based solutions are greatly improved.
Model name          Recall rate    Accuracy rate    F1 score
HSDNN               91.04%         95.18%           93.06%
WLSTM               89.78%         94.93%           92.28%
Opprentice scheme   >66%           83%              73.53%

Table 1: Test results of the deep learning models.
The time series to be detected is extracted from the data source in real time. It is first pre-filtered by statistical and unsupervised algorithms to screen out obviously normal (positive) samples; the remaining suspicious time series data are output and passed to a preprocessing layer for min-max normalization. The offline-trained deep learning model is then loaded to perform anomaly detection. If a sample is detected as a negative sample (abnormal), a corresponding alarm is sent. Meanwhile, the detected abnormal samples and their labels can be fed back to the sample library used for offline model training, for further optimization of the offline model.
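The online pre-filtering and preprocessing steps above can be sketched as follows. The mean/standard-deviation interval follows the statistical primary decision used for pre-filtering (computed over the historical points), while the factor `k=3`, the epsilon guard and the function names are illustrative assumptions rather than the invention's exact parameters.

```python
import numpy as np

def is_suspicious(series, k=3.0):
    # statistical pre-filter: flag the target (last) point when it falls
    # outside [mean - k*std, mean + k*std] of the historical points
    history, target = series[:-1], series[-1]
    mu, sigma = history.mean(), history.std()
    return not (mu - k * sigma <= target <= mu + k * sigma)

def min_max_normalize(series, eps=1e-12):
    # min-max normalization performed by the preprocessing layer;
    # eps guards against a constant series (zero range)
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo + eps)

series = np.array([10.0, 11.0, 9.5, 10.2, 10.8, 9.9, 10.1, 25.0])
if is_suspicious(series):
    x = min_max_normalize(series)  # would then be fed to the deep model
```

Only suspicious series reach the deep model, so the cheap statistical check absorbs most of the traffic and the expensive model runs on a small remainder.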
As described above, the technical solution proposed by the present invention has low maintenance cost and low labor cost: the conventional manual threshold-setting method requires considerable manpower to maintain the thresholds, and feature-engineering-based machine learning methods require experts to mine a large number of features for different business data. The deep-learning-based time series anomaly detection model avoids both threshold setting and feature engineering, and is therefore low in maintenance cost and labor cost.
The technical solution provided by the invention has high accuracy. Time series are widely present in the monitoring of various services; in particular, in the field of operation and maintenance monitoring of Internet enterprises, all service indexes are reported as time series and monitored by a monitoring system. An anomaly in a time series reflects a problem with the corresponding service index; such indexes include, for example, the alarm sending volume or the memory usage of communication software. An anomaly detection result indicating that the original time series is abnormal means that the corresponding service index exceeds a preset threshold. By collecting and labeling data from overseas operation and maintenance, a large amount of data can be used for training and testing the deep learning model, resulting in good model performance; moreover, model accuracy increases with the amount of training data, which is an important advantage of deep learning models. The technical solution also has a high recall rate (wide coverage) and can detect various types of anomalies: the anomaly coverage is significantly higher than that of feature-engineering-based learning methods, with a test recall rate of about 90% and an accuracy rate stably above 95%. Finally, the solution is easy to extend: only the data format needs to match, iterative incremental learning can be performed on the existing model, and extension to other application scenarios is straightforward.
FIG. 11 shows a schematic diagram of an example computing device 1100 for anomaly indicator detection. Computing device 1100 can be a variety of different types of devices, such as server computers, devices associated with clients (e.g., client devices), systems on a chip, and/or any other suitable computing device or computing system.
The computing device 1100 may include at least one processor 1102, memory 1104, communication interface(s) 1106, a display device 1108, other input/output (I/O) devices 1110, and one or more mass storage devices 1112, which may be capable of communicating with each other, such as through a system bus 1114 or other appropriate connection.
The processor 1102 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. The processor 1102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 1102 may be configured to retrieve and execute computer-readable instructions stored in the memory 1104, the mass storage device 1112, or other computer-readable media, such as program code of an operating system 1116, program code of an application 1118, program code of other programs 1120, etc., to implement the methods for anomaly indicator detection provided by embodiments of the present invention.
Memory 1104 and mass storage device 1112 are examples of computer storage media for storing instructions that are executed by processor 1102 to carry out the various functions described above. By way of example, memory 1104 may generally include both volatile and nonvolatile memory (e.g., RAM, ROM, and the like). In addition, mass storage device 1112 may generally include a hard disk drive, solid state drive, removable media including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Memory 1104 and mass storage device 1112 may both be referred to herein collectively as memory or computer storage media and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 1102 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of program modules can be stored on the mass storage device 1112. These programs include an operating system 1116, one or more application programs 1118, other programs 1120, and program data 1122, and they can be loaded into memory 1104 for execution. Examples of such applications or program modules may include, for instance, computer program logic (e.g., computer program code or instructions) to implement the following components/functions: the acquisition module 1001, the anomaly detection module 1002, and/or further embodiments described herein.
Although illustrated in FIG. 11 as being stored in memory 1104 of computing device 1100, modules 1116, 1118, 1120, and 1122, or portions thereof, may be implemented using any form of computer-readable media that is accessible by computing device 1100. As used herein, "computer-readable media" includes at least two types of computer-readable media, namely computer storage media and communication media.
Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Computer storage media, as defined herein, does not include communication media.
Computing device 1100 may also include one or more communication interfaces 1106 for exchanging data with other devices, such as over a network, direct connection, or the like, as previously discussed. The one or more communication interfaces 1106 can facilitate communication within a variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, and so forth. The communication interface 1106 may also provide for communication with external storage devices (not shown), such as in a storage array, network attached storage, storage area network, or the like.
In some examples, a display device 1108, such as a monitor, may be included for displaying information and images. Other I/O devices 1110 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so forth.
Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, the indefinite article "a" or "an" does not exclude a plurality, and "a plurality" means two or more. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (19)

1. An abnormal index detection method includes:
acquiring an original time sequence, wherein the original time sequence comprises a target data point and historical data points before the target data point, the target data point comprises an index value reported at a time point to be detected in the original time sequence, and the historical data points comprise an index value sequence which is reported at a time point before the time point to be detected and is arranged according to a reporting time sequence in the original time sequence;
inputting the original time sequence into an anomaly detection model, processing the original time sequence by the anomaly detection model to obtain an anomaly detection result aiming at the target data point, and determining whether an index reported at the time point to be detected is abnormal according to the anomaly detection result of the target data point, wherein the anomaly detection model is obtained by deep learning and training;
wherein the anomaly detection model comprises a plurality of parallel processing channels including a first channel, a second channel, and a third channel,
wherein the first channel is configured to perform windowing on the original time series with different window sizes to generate a plurality of windowed time series having different data lengths and perform first fully-connected neural network processing on the plurality of windowed time series,
wherein the second channel is configured to perform downsampling on the original time series with different sampling intervals to generate a plurality of downsampled time series having different time resolutions, and perform second fully-connected neural network processing on the plurality of downsampled time series, and
wherein the third channel is configured to determine a mean of a plurality of segments of the original time series to generate a plurality of mean time series, and perform a third fully-connected neural network processing on the plurality of mean time series;
wherein the anomaly detection model stitches outputs of the plurality of parallel processing channels and performs Softmax 2 classification on the stitched outputs.
2. The method of claim 1, wherein the anomaly detection model is trained by deep learning comprising:
i. acquiring a sample time series and a mark corresponding to the sample time series;
ii. inputting the sample time series and the marker into the anomaly detection model, wherein the anomaly detection model processes the sample time series and the marker to obtain an anomaly detection result for the target data point;
iii. adjusting the anomaly detection model based on the anomaly detection result and the marker; and
iv. iterating steps i-iii M times, wherein M is a preset number of iterations.
3. The method of claim 1, wherein the plurality of windowed time series includes N windowed time series, N being an integer greater than or equal to 2, and wherein the first fully-connected neural network processing includes:
inputting the N windowed time sequences into respective corresponding fully-connected neural networks to obtain corresponding N fully-connected neural network outputs;
splicing the ith fully-connected neural network output and the (i + 1) th fully-connected neural network output in the N fully-connected neural network outputs to obtain an ith spliced output, wherein i is an integer variable, and the initial value of i is 1;
the following loop is performed until i equals N:
inputting the ith splicing output into an intermediate fully-connected neural network to obtain an intermediate fully-connected neural network output;
incrementing the value of i;
splicing the intermediate fully-connected neural network output with the (i + 1) th fully-connected neural network output in the N fully-connected neural network outputs to obtain an ith spliced output; and is
Providing the intermediate fully-connected neural network output as an output of the first channel.
4. The method of claim 1, further comprising, prior to said inputting said original time series into an anomaly detection model:
and performing primary anomaly identification on the original time sequence through primary judgment.
5. The method of claim 4, said primary decision comprising a statistical decision method, said primary anomaly identification of said original time series by said primary decision comprising:
extracting the historical data points from the original time series;
determining the mean value and the standard deviation of the historical data points by the statistical decision method;
determining a numerical value interval satisfying the random error according to the mean value and the standard deviation; and
identifying the original time series as anomalous in response to the target data point being outside the numerical value interval.
6. The method of claim 4, said primary decision comprising an unsupervised method, said primary anomaly identification of said original time series by said primary decision comprising:
extracting each data point in the original time sequence;
classifying the extracted data points through an unsupervised algorithm to obtain a classification result;
performing anomaly determination on the original time series based on the classification result.
7. The method of claim 1, further comprising:
and sending an alarm message in response to the abnormity detection result being the abnormity of the original time series.
8. The method of claim 7, wherein the alarm message comprises: a short message (SMS) alarm message, an application alarm message, and a mini-program alarm message.
9. An abnormal index detection method includes:
acquiring an original time sequence, wherein the original time sequence comprises a target data point and historical data points before the target data point, the target data point comprises an index value reported at a time point to be detected in the original time sequence, and the historical data points comprise an index value sequence which is reported at a time point before the time point to be detected and is arranged according to a reporting time sequence in the original time sequence;
inputting the original time sequence into an anomaly detection model, processing the original time sequence by the anomaly detection model to obtain an anomaly detection result for the target data point, and determining whether an index reported at the time point to be detected is abnormal according to the anomaly detection result of the target data point, wherein the anomaly detection model is obtained by deep learning and training;
wherein the anomaly detection model comprises two or more long-short term memory units connected in series, the two or more long-short term memory units comprising a first long-short term memory unit through a qth long-short term memory unit, Q being an integer greater than or equal to 2, the two or more long-short term memory units being configured to perform the steps of:
the first stage is as follows:
the first long short term memory unit is configured to perform windowing on the original time series with a first window size to generate a first windowed time series having a first data length, and perform long short term memory processing on the first windowed time series resulting in a first long short term memory output;
and a second stage:
the Pth long-short term memory unit is configured to perform windowing on the original time sequence by using a Pth window size to generate a Pth windowed time sequence with a Pth data length, splice a P-1 th long-short term memory output and the Pth windowed time sequence, and perform long-short term memory processing on the spliced sequence to obtain a Pth long-short term memory output, wherein the P is an initial integer which is more than or equal to 2 and less than or equal to Q, and the P is an integer which is more than or equal to 2;
repeating the steps in the second stage until P is equal to Q, and obtaining a Q-th long-short term memory output;
wherein the anomaly detection model performs Softmax 2 classification on the Qth long-short term memory output.
10. The method of claim 9, wherein the anomaly detection model is trained by deep learning comprising:
i. acquiring a sample time series and a mark corresponding to the sample time series;
ii. inputting the sample time series and the marker into the anomaly detection model, wherein the anomaly detection model processes the sample time series and the marker to obtain an anomaly detection result for the target data point;
iii. adjusting the anomaly detection model based on the anomaly detection result and the marker; and
iv. iterating steps i-iii M times, wherein M is a preset number of iterations.
11. The method of claim 9, further comprising, prior to said inputting said original time series into an anomaly detection model:
and performing primary anomaly identification on the original time sequence through primary judgment.
12. The method of claim 11, said primary decision comprising a statistical decision method, said primary anomaly identification of said original time series by said primary decision comprising:
extracting the historical data points from the original time series;
determining the mean value and the standard deviation of the historical data points by the statistical decision method;
determining a numerical value interval satisfying the random error according to the mean value and the standard deviation; and
identifying the original time series as anomalous in response to the target data point being outside the numerical value interval.
13. The method of claim 11, said primary decision comprising an unsupervised method, said primary anomaly identification of said original time series by said primary decision comprising:
extracting each data point in the original time sequence;
classifying the extracted data points through an unsupervised algorithm to obtain a classification result;
and carrying out abnormity judgment on the original time sequence based on the classification result.
14. The method of claim 9, further comprising:
and sending an alarm message in response to the abnormity detection result being the abnormity of the original time series.
15. The method of claim 14, wherein the alarm message comprises: a short message (SMS) alarm message, an application alarm message, and a mini-program alarm message.
16. An abnormal index detection apparatus includes:
the acquisition module is configured to acquire an original time sequence, wherein the original time sequence comprises a target data point and historical data points before the target data point, the target data point comprises an index value reported at a time point to be measured in the original time sequence, and the historical data points comprise an index value sequence reported at a time point before the time point to be measured, which is arranged according to a reporting time sequence in the original time sequence;
an anomaly detection module configured to input the original time sequence into an anomaly detection model, wherein the anomaly detection model processes the original time sequence to obtain an anomaly detection result for the target data point, and determines whether the index reported at the time point to be detected is abnormal according to the anomaly detection result for the target data point,
wherein the anomaly detection model comprises a plurality of parallel processing channels including a first channel, a second channel, and a third channel,
wherein the first channel is configured to perform windowing on the original time series with different window sizes to generate a plurality of windowed time series having different data lengths and perform first fully-connected neural network processing on the plurality of windowed time series,
wherein the second channel is configured to perform downsampling on the original time series with different sampling intervals to generate a plurality of downsampled time series having different time resolutions, and perform second fully-connected neural network processing on the plurality of downsampled time series, and
wherein the third channel is configured to determine a mean of a plurality of segments of the original time series to generate a plurality of mean time series, and perform a third fully-connected neural network processing on the plurality of mean time series;
wherein the anomaly detection model stitches outputs of the plurality of parallel processing channels and performs Softmax 2 classification on the stitched outputs.
17. An abnormal index detection apparatus comprising:
the acquisition module is configured to acquire an original time sequence, wherein the original time sequence comprises a target data point and historical data points before the target data point, the target data point comprises an index value reported at a time point to be measured in the original time sequence, and the historical data points comprise an index value sequence which is reported at a time point before the time point to be measured and is arranged according to a reporting time sequence in the original time sequence;
an anomaly detection module configured to input the original time sequence into an anomaly detection model, wherein the anomaly detection model processes the original time sequence to obtain an anomaly detection result for the target data point, and determines whether the index reported at the time point to be detected is abnormal according to the anomaly detection result for the target data point,
wherein the anomaly detection model comprises two or more long-short term memory units connected in series, the two or more long-short term memory units comprising a first long-short term memory unit through a Qth long-short term memory unit, Q being an integer greater than or equal to 2, the two or more long-short term memory units being configured to perform the steps of:
the first stage is as follows:
the first long short term memory unit is configured to perform windowing on the original time series with a first window size to generate a first windowed time series having a first data length, and perform long short term memory processing on the first windowed time series resulting in a first long short term memory output;
and a second stage:
the Pth long-short term memory unit is configured to perform windowing on the original time sequence by using a Pth window size to generate a Pth windowed time sequence with a Pth data length, splice a P-1 th long-short term memory output and the Pth windowed time sequence, and perform long-short term memory processing on the spliced sequence to obtain a Pth long-short term memory output, wherein the P is an initial integer which is more than or equal to 2 and less than or equal to Q, and the P is an integer which is more than or equal to 2;
repeating the steps in the second stage until P is equal to Q, and obtaining a Q-th long-short term memory output;
wherein the anomaly detection model performs Softmax 2 classification on the Qth long-short term memory output.
18. A computer device comprising a memory and a processor, wherein a computer program is stored in the memory which, when executed by the processor, causes the processor to carry out the steps of the method of any one of claims 1-15.
19. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1-15.
CN202010041844.6A, filed 2020-01-15 (priority date 2020-01-15): Abnormal index detection method and device, computer equipment and storage medium. Status: Active. Granted as CN111178456B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010041844.6A | 2020-01-15 | 2020-01-15 | Abnormal index detection method and device, computer equipment and storage medium


Publications (2)

Publication Number | Publication Date
CN111178456A (en) | 2020-05-19
CN111178456B (en) | 2022-12-13

Family

ID=70656284


Country Status (1)

Country | Link
CN (1) | CN111178456B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108063698A (en)* | 2017-12-15 | 2018-05-22 | Neusoft Corporation | Unit exception detection method and device, program product and storage medium
CN108197845A (en)* | 2018-02-28 | 2018-06-22 | Sichuan XW Bank Co., Ltd. | Monitoring method for transaction index abnormality based on the deep learning model LSTM
CN109032829A (en)* | 2018-07-23 | 2018-12-18 | Tencent Technology (Shenzhen) Co., Ltd. | Data exception detection method, device, computer equipment and storage medium
CN109739904A (en)* | 2018-12-30 | 2019-05-10 | Beijing Chengshi Wanglin Information Technology Co., Ltd. | Time series labeling method, apparatus, device and storage medium

Also Published As

Publication number | Publication date
CN111178456A (en) | 2020-05-19

Similar Documents

Publication | Title
CN111178456B (en) | Abnormal index detection method and device, computer equipment and storage medium
US11631014B2 (en) | Computer-based systems configured for detecting, classifying, and visualizing events in large-scale, multivariate and multidimensional datasets and methods of use thereof
Song et al. | Identifying performance anomalies in fluctuating cloud environments: A robust correlative-GNN-based explainable approach
US10311044B2 (en) | Distributed data variable analysis and hierarchical grouping system
US10311368B2 (en) | Analytic system for graphical interpretability of and improvement of machine learning models
US10127477B2 (en) | Distributed event prediction and machine learning object recognition system
US20190354583A1 (en) | Techniques for determining categorized text
US20190370684A1 (en) | System for automatic, simultaneous feature selection and hyperparameter tuning for a machine learning model
US20190258904A1 (en) | Analytic system for machine learning prediction model selection
Pavlovski et al. | Hierarchical convolutional neural networks for event classification on PMU measurements
CN111368980A (en) | State detection method, device, equipment and storage medium
Yuan et al. | Learning latent interactions for event classification via graph neural networks and PMU data
CN110852881A (en) | Risk account identification method and device, electronic equipment and medium
CN109871002B (en) | Concurrent abnormal state identification and positioning system based on tensor label learning
CN111949429A (en) | Server fault monitoring method and system based on density clustering algorithm
CN113986674A (en) | Method and device for detecting abnormality of time series data and electronic equipment
WO2021103401A1 (en) | Data object classification method and apparatus, computer device and storage medium
CN118941153A (en) | Data link anomaly positioning method, device, electronic device and storage medium
CN114693409A (en) | Product matching method, device, computer equipment, storage medium and program product
CN117609911A (en) | Abnormal identification method and device for sensing equipment
CN118113503A (en) | Intelligent operation and maintenance system fault prediction method, device, equipment and storage medium
CN117131405A (en) | Application anomaly detection method, device, equipment and medium
Xiao et al. | Operation and maintenance (O&M) for data center: An intelligent anomaly detection approach
Chen et al. | Generative adversarial synthetic neighbors-based unsupervised anomaly detection
Kabashkin | AI and Evolutionary Computation for Intelligent Aviation Health Monitoring

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
