CN112162515B - An Adversarial Attack Method for Process Monitoring System - Google Patents

An Adversarial Attack Method for Process Monitoring System
Download PDF

Info

Publication number
CN112162515B
CN112162515B (application CN202011080541.1A)
Authority
CN
China
Prior art keywords
process monitoring
sample
monitoring system
subspace
encoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011080541.1A
Other languages
Chinese (zh)
Other versions
CN112162515A (en)
Inventor
葛志强
江肖禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202011080541.1A
Publication of CN112162515A
Application granted
Publication of CN112162515B
Legal status: Active (current)
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

Translated from Chinese

The invention discloses an adversarial attack method for process monitoring systems. The method crafts adversarial samples from industrial data by training a subspace transfer network. The network consists of a perturbation generator, a forward autoencoder, and a reverse autoencoder: the perturbation generator adds perturbations to the original samples, while the forward and reverse autoencoders use subspace information to give those perturbations a direction. Depending on the data available in the observations, divided into Case 1 and Case 2, the attacker can mount adversarial attacks and data poisoning against the process monitoring system. For process monitoring models, the invention provides a subspace transfer network trained by optimization that generates adversarial samples with both evasion-attack and poisoning capability, and uses them to attack the process monitoring model.

Description

Translated from Chinese
An Adversarial Attack Method for Process Monitoring Systems

Technical Field

The invention belongs to the field of industrial information security, and in particular relates to an adversarial attack method for process monitoring systems.

Background

The process monitoring system is the first line of defense for industrial production safety and is widely deployed across industrial processes. Because fault data are diverse and hard to obtain, data-driven process monitoring systems are usually built from normal data alone through unsupervised modeling. The model maps normal samples into a subspace and reconstructs them from it; this subspace is the monitoring space of the process monitoring system. A fault sample mapped into the same subspace cannot be reconstructed well, so the squared prediction error (SPE) between the reconstructed sample and the original sample becomes very large. During operation, once the SPE of a query sample exceeds the control limit, the sample is judged to be a fault, and the process monitoring system raises an alarm as soon as a fault sample is found, preventing the fault from causing more serious losses.
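In standard notation, and assuming a linear reconstruction operator such as a PCA loading matrix P applied to mean-centered data (the text does not fix a particular subspace model), the detection rule described above reads:

\[
\hat{x} = P P^{\top} x, \qquad
\mathrm{SPE}(x) = \lVert x - \hat{x} \rVert_2^2, \qquad
\text{alarm if } \mathrm{SPE}(x) > \delta_{\mathrm{SPE}},
\]

where the control limit \(\delta_{\mathrm{SPE}}\) is estimated from normal training data.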

Process monitoring systems focus mainly on abnormal conditions inside the production system. With the arrival of the industrial big-data era, however, formerly closed industrial information systems have become increasingly connected to the Internet and increasingly open. This also means that process-industry information systems are threatened by external risks: an attacker who can obtain and tamper with sensor data can attack and poison the process monitoring system. Once the process monitoring system is disabled, the safety of the industrial system is at great risk. Adversarial attack methods against process monitoring systems therefore deserve attention, yet they have not been discussed in the past.

Summary of the Invention

The purpose of the invention is to address the deficiencies of the prior art by providing an adversarial attack method for process monitoring systems. The invention uses a novel subspace transfer network to mount evasion attacks on the process monitoring system and to poison the data used to update it.

The purpose of the invention is achieved through the following technical solution: an adversarial attack method for process monitoring systems, in which a subspace transfer network generates, by optimization, adversarial samples that have both evasion-attack and poisoning capability. By default the process monitoring system detects a query sample with the statistic of the squared prediction error before and after subspace reconstruction; a sample whose statistic exceeds the control limit is flagged as a fault. The subspace transfer network has three parts: a perturbation generator, a forward autoencoder, and a reverse autoencoder. The perturbation generator is a multilayer perceptron whose perturbation for an adversarial sample is written g(·); the forward and reverse autoencoders are two dimensionality-reducing autoencoders whose hidden layers are smaller than their input and output layers. The attacker faces one of the following two cases:
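For concreteness, a minimal sketch of these three components follows; the use of PyTorch, the layer sizes, and the activation functions are assumptions, since the invention only specifies a multilayer-perceptron perturbation generator and two dimensionality-reducing autoencoders.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Dimensionality-reducing autoencoder: the hidden layer is smaller than input/output."""
    def __init__(self, m, h):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(m, h), nn.ReLU())
        self.decoder = nn.Linear(h, m)

    def forward(self, x):
        return self.decoder(self.encoder(x))

class PerturbationGenerator(nn.Module):
    """Multilayer perceptron g(.) that outputs an additive perturbation for each sample."""
    def __init__(self, m, h):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(m, h), nn.ReLU(), nn.Linear(h, m))

    def forward(self, x):
        return self.net(x)

m, h = 53, 20                      # e.g. 53 TE process variables; hidden size is assumed
fae = Autoencoder(m, h)            # forward autoencoder (FAE)
rae = Autoencoder(m, h)            # reverse autoencoder (RAE)
gop = PerturbationGenerator(m, h)  # perturbation generator (GOP)
```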

Case 1: by hijacking sensors, the attacker observes and obtains normal samples X ∈ ℝ^(n×m) from the industrial production process, where n is the number of observed normal samples and m is the sample dimension. The goal of the subspace transfer network is that the generated samples are not detected by the original process monitoring system and are added to the update database, so that in the modeling of the updated process monitoring system the query samples are falsely detected as faults. At this point, the updated process monitoring system has been poisoned and can no longer be trusted or used.

Case 2: suppose that, by hijacking sensors, the attacker observes and obtains normal samples X ∈ ℝ^(n×m) and fault samples X_f ∈ ℝ^(n′×m) from the industrial production process, where n is the number of observed normal samples, m is the sample dimension, and n′ is the number of observed fault samples. The goal of the subspace transfer network is that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that in the modeling of the updated process monitoring system the fault samples are treated as normal samples and missed. At this point, the updated process monitoring system has been poisoned and can no longer detect the fault.

Further, in Case 1, the training process of the subspace transfer network is as follows:

Step 1.1: with the obtained normal samples X as input, train the forward autoencoder by minimizing the reconstruction loss

L_FAE = ‖X − FAE(X)‖²,

where FAE(X) is the reconstruction of X under the forward autoencoder. The significance of the forward autoencoder is to construct a first subspace of normal data for the perturbation generator.
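A minimal training sketch for this step, reusing the modules defined above; the optimizer, learning rate, epoch count, and the placeholder data are assumptions.

```python
import torch

def train_autoencoder(model, data, epochs=200, lr=1e-3):
    """Fit an autoencoder by minimizing its squared reconstruction error on `data`."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((data - model(data)) ** 2).mean()   # L_FAE: squared reconstruction error
        loss.backward()
        opt.step()
    return model

X = torch.randn(250, 53)            # stands in for the observed normal samples (n=250, m=53)
fae = train_autoencoder(fae, X)     # Step 1.1: first subspace, fitted to normal data
```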

Step 1.2: feed the obtained normal samples X to the perturbation generator and obtain the adversarial samples after adding the perturbation:

X_adv = X + g(X).

At the same time, with X_adv as input, train the reverse autoencoder by minimizing the loss

L_RAE = ‖X_adv − RAE(X_adv)‖²,

where RAE(X_adv) is the reconstruction of X_adv under the reverse autoencoder. The reverse autoencoder constructs a second subspace, of poisoning data, for the perturbation generator, and this subspace deviates from the sample space of the adversarial samples.
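Continuing the sketch, step 1.2 might look as follows; detaching X_adv while fitting the reverse autoencoder is an implementation assumption.

```python
# Step 1.2: craft perturbed samples and fit the reverse autoencoder (RAE) on them.
X_adv = X + gop(X)                              # additive perturbation g(X)
rae = train_autoencoder(rae, X_adv.detach())    # second ("poisoning") subspace, fitted to X_adv
```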

Step 1.3: use the adversarial samples X_adv as test data for the forward autoencoder and obtain their test loss under the forward autoencoder,

L_FAE^test = ‖X_adv − FAE(X_adv)‖².

Its significance is that, in this first subspace, the adversarial samples cannot be detected by the process monitoring system.

Use the normal samples X as test data for the reverse autoencoder and obtain their test loss under the reverse autoencoder,

L_RAE^test = ‖X − RAE(X)‖².

Its significance is that, in this second subspace, the process monitoring system updated on the adversarial samples falsely detects normal samples.
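The two test losses of this step, expressed with the same sketched modules:

```python
# Step 1.3: subspace test losses that give the perturbation a direction.
l_fae_test = ((X_adv - fae(X_adv)) ** 2).mean()  # small -> X_adv evades the original monitor
l_rae_test = ((X - rae(X)) ** 2).mean()          # large -> the poisoned subspace misfits normal data
```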

Step 1.4: the perturbation generator is trained by optimizing the loss function L_GOP, so that it can directionally generate perturbations that satisfy the above two subspaces and obtain adversarial samples that satisfy the conditions:

L_GOP = α·L_FAE^test + β·L_RAE + γ·L_RAE^test,

where α is the weight factor of the forward-autoencoder test loss, β is the weight factor of the reverse-autoencoder training loss, and γ is the weight factor of the reverse-autoencoder test loss.

Step 1.5: repeat steps 1.3-1.4 until adversarial samples that meet the requirements are produced.
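A sketch of the optimization in steps 1.4-1.5, under the assumption that the perturbation generator and the reverse autoencoder are updated jointly (so that every term of L_GOP carries a gradient); the weight values and iteration count are also assumptions, with γ < 0 as stated for Case 1 in the detailed description below.

```python
# Steps 1.4-1.5 (Case 1): optimize under L_GOP = alpha*L_FAE_test + beta*L_RAE + gamma*L_RAE_test.
alpha, beta, gamma = 1.0, 1.0, -1.0          # gamma < 0 in Case 1; values are assumed
opt = torch.optim.Adam(list(gop.parameters()) + list(rae.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    X_adv = X + gop(X)
    l_fae_test = ((X_adv - fae(X_adv)) ** 2).mean()   # stay inside the normal (first) subspace
    l_rae      = ((X_adv - rae(X_adv)) ** 2).mean()   # fit the poisoning (second) subspace to X_adv
    l_rae_test = ((X - rae(X)) ** 2).mean()           # push that subspace away from normal data
    loss = alpha * l_fae_test + beta * l_rae + gamma * l_rae_test
    loss.backward()
    opt.step()
X_adv = (X + gop(X)).detach()                 # adversarial samples used for evasion and poisoning
```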

Further, in step 1.4, a perturbation loss L_per can also be added to L_GOP to limit the magnitude of the perturbation produced by the perturbation generator; c denotes the threshold set for the perturbation.
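The exact form of L_per is not recoverable from the text; a hinge penalty on the perturbation norm above the threshold c is one plausible reading, shown here purely as an assumption and meant to replace the loss line inside the previous training sketch.

```python
# Assumed form of L_per: penalize only the part of the perturbation norm above the threshold c.
c = 0.1                                                                 # assumed threshold value
l_per = torch.clamp(gop(X).norm(dim=1) - c, min=0.0).mean()
loss = alpha * l_fae_test + beta * l_rae + gamma * l_rae_test + l_per   # L_GOP' = L_GOP + L_per
```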

Further, in Case 2, the training process of the subspace transfer network is as follows:

Step 2.1: with the obtained normal samples X as input, train the forward autoencoder by minimizing the reconstruction loss

L_FAE = ‖X − FAE(X)‖²,

where FAE(X) is the reconstruction of X under the forward autoencoder. The significance of the forward autoencoder is to construct a first subspace of normal data for the perturbation generator.

Step 2.2: feed the obtained fault samples X_f to the perturbation generator and obtain the adversarial samples after adding the perturbation:

X_f,adv = X_f + g(X_f).

At the same time, with X_f,adv as input, train the reverse autoencoder by minimizing the loss

L_RAE = ‖X_f,adv − RAE(X_f,adv)‖²,

where RAE(X_f,adv) is the reconstruction of X_f,adv under the reverse autoencoder. The reverse autoencoder constructs a second subspace, of poisoning data, for the perturbation generator, and this subspace enters the sample space of the adversarial samples.

Step 2.3: use the adversarial samples X_f,adv as test data for the forward autoencoder and obtain their test loss under the forward autoencoder,

L_FAE^test = ‖X_f,adv − FAE(X_f,adv)‖².

Its significance is that, in this first subspace, the adversarial samples cannot be detected by the process monitoring system.

Use the normal samples X as test data for the reverse autoencoder and obtain their test loss under the reverse autoencoder,

L_RAE^test = ‖X − RAE(X)‖².

Its significance is that, in this second subspace, the process monitoring system updated on the adversarial samples cannot detect the fault.

Step 2.4: the perturbation generator is trained by optimizing the loss function L_GOP, so that it can directionally generate perturbations that satisfy the above two subspaces and obtain adversarial samples that satisfy the conditions:

L_GOP = α·L_FAE^test + β·L_RAE + γ·L_RAE^test,

where α is the weight factor of the forward-autoencoder test loss, β is the weight factor of the reverse-autoencoder training loss, and γ is the weight factor of the reverse-autoencoder test loss.

Step 2.5: repeat steps 2.3-2.4 until adversarial samples that meet the requirements are produced.

Further, in step 2.4, a perturbation loss L_per can also be added to L_GOP to limit the magnitude of the perturbation produced by the perturbation generator; c denotes the threshold set for the perturbation.
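For Case 2 the sketch changes only in the generator's input (the observed fault samples) and in the sign of γ (the detailed description below gives γ > 0 here); the loss assembly inside the training loop would then read:

```python
# Case 2: perturb the observed fault samples; gamma is weighted positively here.
alpha, beta, gamma = 1.0, 1.0, 1.0
X_f = torch.randn(250, 53)                        # stands in for the observed fault samples
Xf_adv = X_f + gop(X_f)
l_fae_test = ((Xf_adv - fae(Xf_adv)) ** 2).mean() # adversarial fault samples evade the monitor
l_rae      = ((Xf_adv - rae(Xf_adv)) ** 2).mean() # poisoning subspace fitted to them
l_rae_test = ((X - rae(X)) ** 2).mean()           # normal data still reconstructs well
loss = alpha * l_fae_test + beta * l_rae + gamma * l_rae_test
```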

The beneficial effects of the invention are as follows: the invention crafts adversarial samples from industrial data by training a subspace transfer network. The network consists of a perturbation generator, a forward autoencoder, and a reverse autoencoder: the perturbation generator adds perturbations to the original samples, while the forward and reverse autoencoders use subspace information to give those perturbations a direction. Depending on the data available in the observations, divided into Case 1 and Case 2, the attacker can mount adversarial attacks and data poisoning against the process monitoring system. For process monitoring models, the invention provides a subspace transfer network trained by optimization that generates adversarial samples with both evasion-attack and poisoning capability, and uses them to attack the process monitoring model.

Description of the Drawings

Figure 1 is a schematic diagram of the structure of the subspace transfer network;

Figure 2 is a flow chart of the Tennessee Eastman process;

Figure 3 shows the SPE statistics of the normal data and their adversarial samples under the PCA process monitoring system in Case 1;

Figure 4 shows the SPE statistics of the query samples in Case 1 under the PCA process monitoring systems updated with the normal data and with their adversarial samples after poisoning, respectively;

Figure 5 shows the SPE statistics of the fault data and their adversarial samples under the PCA process monitoring system in Case 2;

Figure 6 shows the SPE statistics of the query samples in Case 2 under the PCA process monitoring systems updated with the normal data and with the adversarial samples of the fault data after poisoning, respectively.

Detailed Description

The adversarial attack method for process monitoring systems proposed by the invention is described in further detail below with reference to specific embodiments.

The adversarial attack method of the invention uses a subspace transfer network (STN) comprising three parts: a perturbation generator (GOP), a forward autoencoder (FAE), and a reverse autoencoder (RAE). The perturbation generator is a multilayer perceptron whose perturbation for an adversarial sample is written g(·); the forward and reverse autoencoders are two dimensionality-reducing autoencoders whose hidden layers are smaller than their input and output layers. The STN generates, by optimization, adversarial samples that have both evasion-attack and poisoning capability.

The process monitoring system mentioned in the invention detects query samples by default with the statistic of the squared prediction error (SPE) computed before and after subspace reconstruction; a sample whose SPE exceeds the control limit is a fault.
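A minimal sketch of such a default monitor, assuming a PCA subspace (as used in the experiments below) and a control limit taken as an empirical quantile of the training SPE; the component count and quantile are assumptions.

```python
import numpy as np

class PCAMonitor:
    """SPE-based process monitor built on a PCA subspace of normal data."""
    def __init__(self, n_components):
        self.k = n_components

    def fit(self, X, quantile=0.99):
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.P_ = vt[: self.k].T                            # loading matrix (m x k)
        self.limit_ = np.quantile(self.spe(X), quantile)    # SPE control limit
        return self

    def spe(self, X):
        Xc = X - self.mean_
        Xhat = Xc @ self.P_ @ self.P_.T                     # reconstruction in the PCA subspace
        return ((Xc - Xhat) ** 2).sum(axis=1)               # squared prediction error

    def is_fault(self, X):
        return self.spe(X) > self.limit_
```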

Case 1:

Suppose that, by hijacking sensors, the attacker observes and obtains normal samples X ∈ ℝ^(n×m) from the industrial production process, where n is the number of observed normal samples, m is the sample dimension, and ℝ denotes the real numbers. The goal of the STN is that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that in the modeling of the updated process monitoring system the query samples are falsely detected as faults. At this point, the updated process monitoring system has been poisoned and can no longer be trusted or used. The training process of the STN is as follows:

Step 1: with the obtained normal samples X as input, train the forward autoencoder FAE by minimizing the reconstruction loss

L_FAE = ‖X − FAE(X)‖²,

where FAE(X) is the reconstruction of X under the FAE. The significance of the FAE is to construct a first subspace of normal data for the GOP.

Step 2: feed the obtained normal samples X to the GOP and obtain the adversarial samples after adding the perturbation g(·):

X_adv = X + g(X).

At the same time, with the adversarial samples X_adv as input, train the reverse autoencoder RAE by minimizing the loss

L_RAE = ‖X_adv − RAE(X_adv)‖²,

where RAE(X_adv) is the reconstruction of X_adv under the RAE. The RAE constructs a second subspace, of poisoning data, for the GOP, and this subspace deviates from the sample space of the adversarial samples.

Step 3: use the adversarial samples X_adv as test data for the FAE and obtain their test loss under the FAE,

L_FAE^test = ‖X_adv − FAE(X_adv)‖².

The optimization of L_FAE^test aims to make the adversarial samples undetectable by the process monitoring system within the first subspace.

Use the normal samples X as test data for the RAE and obtain their test loss under the RAE,

L_RAE^test = ‖X − RAE(X)‖².

The optimization of L_RAE^test aims to make the process monitoring system updated on the adversarial samples falsely detect normal samples within the second subspace.

Step 4: optimize the GOP under the loss function L_GOP so that it can directionally generate perturbations that satisfy the two subspaces and obtain adversarial samples that satisfy the conditions:

L_GOP = α·L_FAE^test + β·L_RAE + γ·L_RAE^test,

where α > 0 is the weight factor of the FAE test loss L_FAE^test, β > 0 is the weight factor of the RAE training loss L_RAE, and γ < 0 is the weight factor of the RAE test loss L_RAE^test.

Preferably, a perturbation loss L_per can also be added to L_GOP to limit the magnitude of the perturbation produced by the GOP, where c is the threshold set for the perturbation; the combined loss is L_GOP′ = L_GOP + L_per.

Step 5: repeat steps 3-4 until adversarial samples that meet the requirements are produced.

Case 2:

Suppose that, by hijacking sensors, the attacker observes and obtains normal samples X ∈ ℝ^(n×m) and fault samples X_f ∈ ℝ^(n′×m) from the industrial production process, where n is the number of observed normal samples, m is the sample dimension, and n′ is the number of observed fault samples. The goal of the STN is that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that in the modeling of the updated process monitoring system the fault samples are treated as normal samples and missed. At this point, the updated process monitoring system has been poisoned and can no longer detect the fault. The training process of the STN is as follows:

Step 1: with the obtained normal samples X as input, train the forward autoencoder FAE by minimizing the reconstruction loss

L_FAE = ‖X − FAE(X)‖²,

where FAE(X) is the reconstruction of X under the FAE. The significance of the FAE is to construct a first subspace of normal data for the GOP.

Step 2: feed the obtained fault samples X_f to the GOP and obtain the adversarial samples after adding the perturbation:

X_f,adv = X_f + g(X_f).

At the same time, with the adversarial samples X_f,adv as input, train the reverse autoencoder RAE by minimizing the loss

L_RAE = ‖X_f,adv − RAE(X_f,adv)‖²,

where RAE(X_f,adv) is the reconstruction of X_f,adv under the RAE. The RAE constructs a second subspace, of poisoning data, for the GOP, and this subspace enters the sample space of the adversarial samples.

Step 3: use the adversarial samples X_f,adv as test data for the FAE and obtain their test loss under the FAE,

L_FAE^test = ‖X_f,adv − FAE(X_f,adv)‖².

The optimization of L_FAE^test aims to make the adversarial samples undetectable by the process monitoring system within the first subspace.

Use the normal samples X as test data for the RAE and obtain their test loss under the RAE,

L_RAE^test = ‖X − RAE(X)‖².

The optimization of L_RAE^test aims to make the process monitoring system updated on the adversarial samples unable to detect faults within the second subspace.

Step 4: optimize the GOP under the loss function L_GOP so that it can directionally generate perturbations that satisfy the two subspaces and obtain adversarial samples that satisfy the conditions:

L_GOP = α·L_FAE^test + β·L_RAE + γ·L_RAE^test,

where α > 0 is the weight factor of the FAE test loss, β > 0 is the weight factor of the RAE training loss, and γ > 0 is the weight factor of the RAE test loss. Preferably, a perturbation loss L_per can also be added to L_GOP to limit the magnitude of the perturbation produced by the GOP, where c is the threshold set for the perturbation.

Step 5: repeat steps 3-4 until adversarial samples that meet the requirements are produced.

The method is illustrated below with a specific example on the Tennessee Eastman (TE) process. The TE process is a standard benchmark data set widely used in fault diagnosis and fault classification; the full data set contains 53 process variables, and the process flow is shown in Figure 2. The flow consists of five operating units: a gas-liquid separation column, a continuously stirred tank reactor, a partial condenser, a centrifugal compressor, and a reboiler. The process can be described by multiple algebraic and differential equations, and nonlinearity and strong coupling are the main characteristics of its sensor data.

The TE process can be configured with 21 fault classes, of which 16 are known and 5 are unknown; the fault types include step changes in flow, slow ramp increases, valve stiction, and so on, covering typical nonlinear and dynamic faults. For this process, all 53 process variables are used as modeling variables, and fault 14 (reactor cooling-water valve stiction) is selected for the adversarial attack and data poisoning experiments of the two cases in this application. Case 1 uses 500 normal samples to build the PCA process monitoring system, 250 normal samples as the observed normal samples to train the STN, adversarial samples crafted from those 250 normal samples to build the updated PCA process monitoring system, and 300 normal samples as query samples. Case 2 uses 500 normal samples to build the PCA process monitoring system, 250 normal samples and 250 fault-14 samples as the observed samples to train the STN, adversarial samples crafted from the 250 fault-14 samples to build the updated PCA process monitoring system, and 160 normal samples plus 300 fault-14 samples as query samples.
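A minimal sketch of the Case 1 experimental pipeline under these splits, reusing the PCAMonitor and STN sketches above; the data loader load_te_normal is a hypothetical placeholder, and the number of principal components is an assumption.

```python
import torch

X_all = load_te_normal()                            # hypothetical loader for TE normal data, shape (*, 53)
X_build, X_obs, X_query = X_all[:500], X_all[500:750], X_all[750:1050]

monitor = PCAMonitor(n_components=27).fit(X_build)  # original PCA monitor (component count assumed)

Xo = torch.as_tensor(X_obs, dtype=torch.float32)
# ... train the STN (FAE, RAE, GOP) on Xo as in the steps above, then craft the poisoned samples:
X_poison = (Xo + gop(Xo)).detach().numpy()

print(monitor.is_fault(X_poison).mean())            # evasion: fraction of poisoned samples flagged (low)
updated = PCAMonitor(n_components=27).fit(X_poison) # "updated" monitor rebuilt on the poisoned samples
print(updated.is_fault(X_query).mean())             # poisoning: fraction of normal queries flagged (high)
```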

As can be seen from Figure 3, in Case 1 the SPE statistics of the normal data and of their adversarial samples lie below the control limit of the process monitoring system, which means the process monitoring system cannot detect the adversarial samples. Figure 4 shows the results of the updated process monitoring systems built on the normal samples and on the poisoned adversarial samples, respectively: the normal query samples are judged normal by the unpoisoned process monitoring system but are judged faulty by the poisoned, updated one. This shows that in Case 1 the adversarial samples produced by the STN have both evasion-attack and data poisoning capability.

As can be seen from Figure 5, in Case 2 the SPE statistics of the fault-14 samples far exceed the control limit of the process monitoring system, so the process monitoring system easily detects fault 14, whereas the SPE statistics of the adversarial samples derived from the fault-14 samples all lie below the control limit, so the process monitoring system cannot effectively detect the adversarial samples. Figure 6 shows the results of the updated process monitoring systems in Case 2 for the fault-14 samples and for the adversarial samples, respectively: the updated process monitoring system built on normal samples detects the fault-14 query samples well, whereas under the updated process monitoring system built on the adversarial samples, the SPE statistics of the normal samples and of most fault-14 query samples stay below the control limit. This shows that in Case 2 the adversarial samples produced by the STN have both evasion-attack and data poisoning capability.

Claims (5)

1. An adversarial attack method for a process monitoring system, characterized in that a subspace transfer network generates, in an optimized manner, adversarial samples that simultaneously have evasion-attack and poisoning capability; the process monitoring system detects a query sample by default through the statistic of the squared prediction error before and after subspace reconstruction, and a sample whose squared prediction error exceeds the control limit is detected as a fault; the subspace transfer network comprises three parts: a perturbation generator, a forward autoencoder and a reverse autoencoder; the perturbation generator is a multilayer perceptron, and the perturbation it generates for an adversarial sample is written g(·); the forward autoencoder and the reverse autoencoder are two dimensionality-reducing autoencoders whose hidden layers are smaller than their input and output layers; the attacker attacks in one of the following two cases:
case 1: the attacker observes and obtains normal samples X ∈ ℝ^(n×m) from the industrial production process by hijacking sensors, where n is the number of observed normal samples and m is the sample dimension; the goal of the subspace transfer network is that the generated samples are not detected by the original process monitoring system and are added to the update database, so that in the modeling of the updated process monitoring system the query samples are falsely detected as faults; at this point the updated process monitoring system has been poisoned and can no longer be trusted or used;
case 2: the attacker observes and obtains normal samples X ∈ ℝ^(n×m) and fault samples X_f ∈ ℝ^(n′×m) from the industrial production process by hijacking sensors, where n is the number of observed normal samples, m is the sample dimension, and n′ is the number of observed fault samples; the goal of the subspace transfer network is that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that in the modeling of the updated process monitoring system the fault samples are treated as normal samples and missed; at this point the updated process monitoring system has been poisoned and can no longer detect a fault.
2. The adversarial attack method for a process monitoring system according to claim 1, wherein in case 1 the training process of the subspace transfer network is:
step 1.1: with the obtained normal samples X as input, train the forward autoencoder by minimizing the reconstruction loss L_FAE = ‖X − FAE(X)‖², where FAE(X) is the reconstruction of X under the forward autoencoder; the significance of the forward autoencoder is to construct a first subspace of normal data for the perturbation generator;
step 1.2: feed the obtained X to the perturbation generator and obtain the adversarial samples after adding the perturbation, X_adv = X + g(X); at the same time, with X_adv as input, train the reverse autoencoder by minimizing the loss L_RAE = ‖X_adv − RAE(X_adv)‖², where RAE(X_adv) is the reconstruction of X_adv under the reverse autoencoder; the reverse autoencoder constructs a second subspace, of poisoning data, for the perturbation generator, which deviates from the sample space of the adversarial samples;
step 1.3: use X_adv as test data of the forward autoencoder and obtain its test loss under the forward autoencoder, L_FAE^test = ‖X_adv − FAE(X_adv)‖²; its significance is that, in the first subspace, the adversarial samples cannot be detected by the process monitoring system; use X as test data of the reverse autoencoder and obtain its test loss under the reverse autoencoder, L_RAE^test = ‖X − RAE(X)‖²; its significance is that, in the second subspace, the process monitoring system updated on the adversarial samples falsely detects normal samples;
step 1.4: the perturbation generator is optimized and trained under the loss function L_GOP = α·L_FAE^test + β·L_RAE + γ·L_RAE^test, so that it can directionally generate perturbations satisfying the above two subspaces and obtain adversarial samples satisfying the conditions, where α is the weight factor of the forward-autoencoder test loss, β is the weight factor of the reverse-autoencoder training loss, and γ is the weight factor of the reverse-autoencoder test loss;
step 1.5: repeat steps 1.3-1.4 until adversarial samples meeting the requirements are produced.
3. The adversarial attack method for a process monitoring system according to claim 2, wherein in step 1.4 a perturbation loss L_per can further be added to L_GOP to limit the magnitude of the perturbation generated by the perturbation generator, where c is the threshold set for the perturbation.
4. The adversarial attack method for a process monitoring system according to claim 1, wherein in case 2 the training process of the subspace transfer network is:
step 2.1: with the obtained normal samples X as input, train the forward autoencoder by minimizing the reconstruction loss L_FAE = ‖X − FAE(X)‖², where FAE(X) is the reconstruction of X under the forward autoencoder; the significance of the forward autoencoder is to construct a first subspace of normal data for the perturbation generator;
step 2.2: feed the obtained fault samples X_f to the perturbation generator and obtain the adversarial samples after adding the perturbation, X_f,adv = X_f + g(X_f); at the same time, with X_f,adv as input, train the reverse autoencoder by minimizing the loss L_RAE = ‖X_f,adv − RAE(X_f,adv)‖², where RAE(X_f,adv) is the reconstruction of X_f,adv under the reverse autoencoder; the reverse autoencoder constructs a second subspace, of poisoning data, for the perturbation generator, which enters the sample space of the adversarial samples;
step 2.3: use X_f,adv as test data of the forward autoencoder and obtain its test loss under the forward autoencoder, L_FAE^test = ‖X_f,adv − FAE(X_f,adv)‖²; its significance is that, in the first subspace, the adversarial samples cannot be detected by the process monitoring system; use X as test data of the reverse autoencoder and obtain its test loss under the reverse autoencoder, L_RAE^test = ‖X − RAE(X)‖²; its significance is that, in the second subspace, the process monitoring system updated on the adversarial samples cannot detect faults;
step 2.4: the perturbation generator is optimized and trained under the loss function L_GOP = α·L_FAE^test + β·L_RAE + γ·L_RAE^test, so that it can directionally generate perturbations satisfying the above two subspaces and obtain adversarial samples satisfying the conditions, where α is the weight factor of the forward-autoencoder test loss, β is the weight factor of the reverse-autoencoder training loss, and γ is the weight factor of the reverse-autoencoder test loss;
step 2.5: repeat steps 2.3-2.4 until adversarial samples meeting the requirements are produced.
5. The adversarial attack method for a process monitoring system according to claim 4, wherein in step 2.4 a perturbation loss L_per can further be added to L_GOP to limit the magnitude of the perturbation generated by the perturbation generator, where c is the threshold set for the perturbation.
CN202011080541.1A | priority 2020-10-10 | filed 2020-10-10 | An Adversarial Attack Method for Process Monitoring System | Active | CN112162515B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011080541.1A (CN112162515B) | 2020-10-10 | 2020-10-10 | An Adversarial Attack Method for Process Monitoring System

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011080541.1A (CN112162515B) | 2020-10-10 | 2020-10-10 | An Adversarial Attack Method for Process Monitoring System

Publications (2)

Publication Number | Publication Date
CN112162515A (en) | 2021-01-01
CN112162515B (granted) | 2021-08-03

Family

ID=73868016

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011080541.1A (Active, CN112162515B) | An Adversarial Attack Method for Process Monitoring System | 2020-10-10 | 2020-10-10

Country Status (1)

Country | Link
CN (1) | CN112162515B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113361648B (en)* | 2021-07-07 | 2022-07-05 | 浙江大学 | Information fingerprint extraction method for safe industrial big data analysis
CN118313416B (en)* | 2024-06-11 | 2024-09-06 | 中国人民解放军国防科技大学 | Attack method and device for countering cooperative countering of sample attack and back door attack

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2019075771A1 (en)* | 2017-10-20 | 2019-04-25 | Huawei Technologies Co., Ltd. | Self-training method and system for semi-supervised learning with generative adversarial networks
CN110334806A (en)* | 2019-05-29 | 2019-10-15 | 广东技术师范大学 | A method of adversarial sample generation based on generative adversarial network
CN110598400A (en)* | 2019-08-29 | 2019-12-20 | 浙江工业大学 | Defense method for high hidden poisoning attack based on generation countermeasure network and application
WO2020057867A1 (en)* | 2018-09-17 | 2020-03-26 | Robert Bosch GmbH | Device and method for training an augmented discriminator
CN111353548A (en)* | 2020-03-11 | 2020-06-30 | 中国人民解放军军事科学院国防科技创新研究院 | Robust feature deep learning method based on confrontation space transformation network
WO2020143227A1 (en)* | 2019-01-07 | 2020-07-16 | 浙江大学 | Method for generating malicious sample of industrial control system based on adversarial learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108446765A (en)* | 2018-02-11 | 2018-08-24 | 浙江工业大学 | Multi-model composite defense method against adversarial attacks for deep learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2019075771A1 (en)* | 2017-10-20 | 2019-04-25 | Huawei Technologies Co., Ltd. | Self-training method and system for semi-supervised learning with generative adversarial networks
WO2020057867A1 (en)* | 2018-09-17 | 2020-03-26 | Robert Bosch GmbH | Device and method for training an augmented discriminator
WO2020143227A1 (en)* | 2019-01-07 | 2020-07-16 | 浙江大学 | Method for generating malicious sample of industrial control system based on adversarial learning
CN110334806A (en)* | 2019-05-29 | 2019-10-15 | 广东技术师范大学 | A method of adversarial sample generation based on generative adversarial network
CN110598400A (en)* | 2019-08-29 | 2019-12-20 | 浙江工业大学 | Defense method for high hidden poisoning attack based on generation countermeasure network and application
CN111353548A (en)* | 2020-03-11 | 2020-06-30 | 中国人民解放军军事科学院国防科技创新研究院 | Robust feature deep learning method based on confrontation space transformation network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on statistical process monitoring methods based on autoencoders; Guo Pengju; China Excellent Master's Theses Full-text Database, Information Science and Technology; 2020-02-15 (No. 02); pp. 55-71 *
A survey of adversarial example generation techniques; Pan Wenwen et al.; Journal of Software; 2020-01-31; Vol. 31 (No. 01); pp. 67-81 *
Analysis of adversarial example attacks for low-dimensional industrial control network data sets; Zhou Wen et al.; Journal of Computer Research and Development; 2020-04-13 (No. 04); pp. 70-79 *

Also Published As

Publication number | Publication date
CN112162515A (en) | 2021-01-01

Similar Documents

Publication | Title
US12437063B2 (en) | Unified multi-agent system for abnormality detection and isolation
US10841322B2 (en) | Decision system and method for separating faults from attacks
US11252169B2 (en) | Intelligent data augmentation for supervised anomaly detection associated with a cyber-physical system
US11503045B2 (en) | Scalable hierarchical abnormality localization in cyber-physical systems
Yang et al. | Anomaly-based intrusion detection for SCADA systems
US9497204B2 (en) | In-situ trainable intrusion detection system
Chang et al. | Anomaly detection for industrial control systems using k-means and convolutional autoencoder
CN112162515B (en) | An Adversarial Attack Method for Process Monitoring System
CN104486141A (en) | Misdeclaration self-adapting network safety situation predication method
CN116304959B (en) | Method and system for defending against sample attack for industrial control system
CN113281998A (en) | Multi-point FDI attack detection method for industrial information physical system based on generation countermeasure network
CN115596654B (en) | Reciprocating compressor fault diagnosis method and system based on state parameter learning
Guo et al. | Fault Detection of Reciprocating Compressor Valve Based on One-Dimensional Convolutional Neural Network
Zhang et al. | A new deep convolutional domain adaptation network for bearing fault diagnosis under different working conditions
CN111784404B (en) | A method for identifying abnormal assets based on behavioral variable prediction
CN117294515A (en) | Industrial control network protocol fuzzy test method based on generation of countermeasure network
Jiang et al. | Attacks on data-driven process monitoring systems: Subspace transfer networks
Yang et al. | A Fault Identification Method for Electric Submersible Pumps Based on DAE-SVM
Luktarhan et al. | Multi-stage attack detection algorithm based on hidden markov model
Kumaran et al. | AI based Pentest For EHR and Other Health Monitoring Devices
CN118677669B (en) | A spatiotemporal-based intrusion detection method for autonomous vehicles
CN113162904B (en) | A network security alarm evaluation method for power monitoring system based on probabilistic graph model
Cui et al. | An Improved Support Vector Machine Attack Detection Algorithm for Industry Controls System
CN109495437B (en) | A network anomaly detection method in industrial control system using online machine learning algorithm
CN117675277A (en) | Industrial control protocol fuzzy test method based on probability mutation

Legal Events

Code | Event
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
