CN119090053A - Factory fire safety hazard prediction method and system based on MoE large model - Google Patents

Factory fire safety hazard prediction method and system based on MoE large model

Info

Publication number
CN119090053A
Authority
CN
China
Prior art keywords
model
moe
features
layer
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411089656.5A
Other languages
Chinese (zh)
Inventor
孙浩
徐昆
李启凯
刘浩瑞
李亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Smart Building Technology Co ltd
Original Assignee
Shandong Inspur Smart Building Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Smart Building Technology Co ltd
Priority to CN202411089656.5A
Publication of CN119090053A
Legal status: Pending (current)

Abstract

Translated from Chinese

The present invention is applicable to the field of safety early warning technology and provides a method and system for predicting factory fire safety hazards based on the MoE large model, comprising the following steps: collecting multimodal data, the multimodal data including environmental monitoring data and equipment operation logs, the equipment operation logs recording the equipment's operating efficiency, historical maintenance information, and fault anomaly indicators; extracting deep features from the multimodal data, performing feature fusion through an attention network model, and outputting deep fusion features; building a MoE model comprising common feedforward neural network sub-models and temporal convolutional network sub-models, dynamically selecting and adjusting the weight of each common feedforward neural network sub-model and temporal convolutional network sub-model based on a gating network, inputting the deep fusion features into the MoE model, and generating prediction results. The present invention combines the diversified processing capabilities of the MoE model with the long time series analysis advantages of the TCN, improving the accuracy and response speed of prediction.

Description

Factory fire safety hazard prediction method and system based on MoE large model
Technical Field
The invention relates to the technical field of safety early warning, in particular to a factory fire safety hazard prediction method and system based on a MoE large model.
Background
In the field of existing industrial safety monitoring, although conventional fire prevention systems are capable of detecting the occurrence of a fire to some extent, for example, by means of smoke detectors and temperature sensors, these systems often lack the ability to predict potential fire hazards and therefore cannot give early warning in advance. In addition, conventional methods tend to be inefficient in processing and analyzing large-scale and complex data, particularly equipment operation log information, and have difficulty meeting the requirements of modern factories for high accuracy and real-time response. Therefore, it is necessary to provide a method and a system for predicting fire safety hazards of factories based on a MoE large model, so as to solve the above problems.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide a method and a system for predicting factory fire safety hazards based on a MoE large model, so as to solve the problems existing in the background art.
The invention discloses a factory fire safety hazard prediction method based on a MoE large model, which comprises the following steps:
Collecting multi-mode data, wherein the multi-mode data comprises environment monitoring data and equipment operation logs, and the equipment operation logs record the operation efficiency, the historical maintenance information and fault abnormality indexes of equipment;
extracting multi-mode data to obtain depth features, carrying out feature fusion through an attention network model, and outputting the depth fusion features;
Building a MoE model comprising a common feedforward neural network sub-model and a time convolution network sub-model, and dynamically selecting and adjusting the weight of each common feedforward neural network sub-model and each time convolution network sub-model based on a gating network;
and inputting the depth fusion characteristics into the MoE model, and reasoning to generate a prediction result.
According to the method, the multi-mode data are acquired through various sensors, the temperature sensor is used for monitoring the ambient temperature, the smoke detector is used for detecting the smoke concentration, the video monitoring is used for acquiring real-time visual data in a factory, and abnormal behaviors, flames and smoke characteristics are captured.
The step of extracting the multi-mode data to obtain the depth features comprises the following steps:
analyzing, with an LSTM, the multi-mode data acquired by the temperature sensor and the smoke detector, identifying abnormal patterns of temperature and smoke concentration, and extracting the corresponding time sequence features;
flame and smoke features in visual data are extracted based on an edge feature detection function of a convolutional neural network.
The step of performing feature fusion through the attention network model and outputting the depth fusion features comprises the following steps:
Outputting a mutually weighted attention score between modal features based on an attention network model, the attention network model being a multi-layer neural network;
And introducing a synchronization mechanism into the attention network model to ensure the temporal consistency of the features before fusion, handling the temporal misalignment of data from different modes, and outputting the depth fusion features.
The common feedforward neural network sub-model comprises multiple fully connected layers. The first layer is an input layer, which directly receives raw data from the sensors; the following hidden layers extract high-level features from the input data, each using a ReLU activation function to enhance the network's ability to handle nonlinear problems; the last layer is an output layer, which converts the output of the hidden layers into the corresponding prediction result.
As a further aspect of the invention, the time convolution network sub-model is used to analyze data with time dependency; each dilated convolution layer is followed by a batch normalization layer and a ReLU activation function, and the time convolution network sub-model also comprises a global average pooling layer that aggregates the features in the time dimension.
The gating network is composed of fully connected layers and outputs the weight of each sub-model's contribution to the final prediction; the weights are calculated by a softmax function, ensuring that the sum of all weights is 1.
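Expressed as a formula (the description is verbal only, so the following is the standard softmax-gating formulation, with $g_i(x)$ denoting the gating network's logit for sub-model $i$, $y_i$ that sub-model's output, and $N$ the number of sub-models):

$$w_i = \frac{\exp\big(g_i(x)\big)}{\sum_{j=1}^{N}\exp\big(g_j(x)\big)}, \qquad \sum_{i=1}^{N} w_i = 1, \qquad \hat{y} = \sum_{i=1}^{N} w_i\, y_i .$$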
Another object of the present invention is to provide a factory fire safety hazard prediction system based on a MoE large model, the system comprising:
The multi-mode data acquisition module is used for acquiring multi-mode data, wherein the multi-mode data comprises environment monitoring data and equipment operation logs, and the equipment operation logs record the operation efficiency, historical maintenance information and fault abnormality indexes of the equipment;
the feature extraction and fusion module is used for extracting the multi-mode data to obtain depth features, carrying out feature fusion through the attention network model and outputting the depth fusion features;
The MoE large model construction module is used for constructing a MoE model comprising a common feedforward neural network sub-model and a time convolution network sub-model, and dynamically selecting and adjusting the weight of each common feedforward neural network sub-model and each time convolution network sub-model based on a gating network;
And the characteristic reasoning prediction module is used for inputting the depth fusion characteristic into the MoE model, reasoning and generating a prediction result.
As a further aspect of the invention, the feature extraction and fusion module comprises:
The time sequence feature unit is used for analyzing, with an LSTM, the multi-mode data acquired by the temperature sensor and the smoke detector, identifying abnormal patterns of temperature and smoke concentration, and extracting the corresponding time sequence features;
And the edge feature detection unit is used for extracting flame and smoke features in the visual data based on an edge feature detection function of the convolutional neural network.
As a further aspect of the invention, the feature extraction and fusion module further comprises:
A weighted attention score unit for outputting a mutually weighted attention score between modal features based on an attention network model, the attention network model being a multi-layer neural network;
And the time misalignment processing unit is used for introducing a synchronization mechanism into the attention network model to ensure the temporal consistency of the features before fusion, handling the temporal misalignment of data from different modes, and outputting the depth fusion features.
Compared with the prior art, the invention has the beneficial effects that:
The invention combines the diversified processing capability of the MoE model with the long time series analysis advantage of the TCN, improves the accuracy of factory fire hazard prediction, remarkably improves the timeliness of early warning, provides a comprehensive safety monitoring solution for the modern factory, and greatly improves the factory's safety management capability and emergency response efficiency.
Drawings
FIG. 1 is a flow chart of a method for predicting fire safety hazards of a factory based on a MoE large model;
FIG. 2 is a flow chart of determining fusion features in a method for predicting fire safety hazards in a factory based on a MoE large model;
FIG. 3 is a schematic diagram of a plant fire hazard prediction system based on a MoE large model.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
As shown in fig. 1, the embodiment of the invention provides a method for predicting fire safety hazards of a factory based on a MoE large model, which comprises the following steps:
S100, collecting multi-mode data, wherein the multi-mode data comprises environment monitoring data and equipment operation logs, and the equipment operation logs record the operation efficiency, the historical maintenance information and fault abnormality indexes of equipment;
S200, extracting multi-mode data to obtain depth features, carrying out feature fusion through an attention network model, and outputting the depth fusion features;
S300, building a MoE model comprising a common feedforward neural network sub-model and a time convolution network sub-model, and dynamically selecting and adjusting the weight of each common feedforward neural network sub-model and each time convolution network sub-model based on a gating network;
S400, inputting the depth fusion features into the MoE model, and reasoning to generate a prediction result.
It should be noted that:
In the embodiment of the invention, the multi-mode data are acquired through a plurality of sensors: a temperature sensor monitors the ambient temperature, a smoke detector detects the smoke concentration, and video surveillance acquires real-time visual data in the factory and captures abnormal behaviors, flames and smoke characteristics. The multi-mode data undergo format unification and noise filtering through an efficient preprocessing algorithm to obtain a preliminary feature representation. The multi-mode data are then processed to extract depth features, feature fusion is carried out through the attention network model, and the depth fusion features are output. A MoE model comprising common feedforward neural network (FFN) sub-models and time convolution network (TCN) sub-models is then built, and a gating network dynamically selects and adjusts the weight of each FFN and TCN sub-model.
The FFN sub-model specifically processes real-time data from devices such as temperature sensors and smoke detectors; such data typically do not contain time series information. The FFN structure includes multiple fully connected layers. The first layer is the input layer, which directly receives raw data from the sensors, such as temperature values or smoke concentrations. The following hidden layers extract high-level features from these input data, each layer using a ReLU activation function to enhance the network's ability to handle nonlinear problems. The last layer is the output layer, which converts the output of the hidden layers into a corresponding prediction result, such as a fire risk assessment. The structural advantage of the FFN is rapid processing and response, so it can immediately identify sudden abnormal data, which is important for an instant early warning system. The TCN sub-model is used to analyze data with time dependency, such as equipment operation logs. These logs typically record the operational status and historical behavior of the equipment, and time series analysis of them is critical to predicting equipment failure and potential fire risk. A key feature of the TCN is that it can utilize dilated convolutions, which expand the receptive field of the network without significantly increasing computational complexity. Each dilated convolution layer is typically followed by a batch normalization layer and a ReLU activation function to stabilize the training process and enhance the predictive power of the model. The last part of the TCN is typically a global average pooling layer that aggregates features in the time dimension, and finally an output layer converts these features into specific prediction outputs.
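As an illustration only, a minimal PyTorch sketch of the two kinds of sub-model described above might look as follows; the layer sizes, kernel width and dilation factors (1, 2, 4) are assumptions, since the description does not fix them.

```python
import torch
import torch.nn as nn

class FFNExpert(nn.Module):
    """Feedforward expert for instantaneous sensor readings
    (temperature, smoke concentration) with no time dependency."""
    def __init__(self, in_dim: int, hidden_dim: int = 64, out_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),    # input layer -> first hidden layer
            nn.ReLU(),                        # ReLU for nonlinearity, as in the text
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),   # output layer -> e.g. fire risk score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, in_dim)
        return self.net(x)

class TCNExpert(nn.Module):
    """Temporal convolutional expert for equipment operation logs;
    dilated 1-D convolutions widen the receptive field cheaply."""
    def __init__(self, in_channels: int, channels: int = 32, out_dim: int = 1):
        super().__init__()
        layers = []
        for dilation in (1, 2, 4):            # assumed dilation schedule
            layers += [
                nn.Conv1d(in_channels, channels, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.BatchNorm1d(channels),     # batch norm after each dilated conv
                nn.ReLU(),
            ]
            in_channels = channels
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)   # global average pooling over time
        self.head = nn.Linear(channels, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, time)
        h = self.pool(self.conv(x)).squeeze(-1)
        return self.head(h)
```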
The gating network is the core of the MoE model and is responsible for dynamically selecting the appropriate sub-models and adjusting their output weights. The gating network itself is made up of fully connected layers; its inputs come from the FFN and TCN sub-models, and its outputs are the weights of each sub-model's contribution to the final prediction, calculated by a softmax function so that all weights sum to 1. The output weights of the gating network determine how much each sub-model's output influences the final prediction result, so the model can flexibly adjust its strategy according to the characteristics of the current data. In the invention, the FFN processes simple instantaneous data features, the TCN deeply analyzes complex time series data, and the gating network adjusts the weights of all sub-models according to real-time data conditions, so the whole system can monitor and predict factory fire hazards in real time and in multiple dimensions. This multi-modal, multi-level combination not only greatly improves prediction accuracy, but also enhances the adaptability and responsiveness of the system to various scenarios, providing strong technical support for factory safety management.
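A sketch of how such a gate could combine the experts follows; it is an assumed minimal implementation in which the gate is a small fully connected network fed with the sub-model outputs, as described above, and the softmax guarantees the weights sum to 1 (the hidden size of 16 is illustrative).

```python
import torch
import torch.nn as nn

class MoEPredictor(nn.Module):
    """Mixture-of-experts head: a gating network assigns softmax weights
    to the experts, and the prediction is the weighted sum of their outputs."""
    def __init__(self, experts: list[nn.Module], expert_out_dim: int = 1):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.gate = nn.Sequential(            # fully connected gate, fed with expert outputs
            nn.Linear(len(experts) * expert_out_dim, 16),
            nn.ReLU(),
            nn.Linear(16, len(experts)),
        )

    def forward(self, inputs: list[torch.Tensor]) -> torch.Tensor:
        # inputs[i] is the feature tensor routed to expert i
        outs = [e(x) for e, x in zip(self.experts, inputs)]                  # each (batch, out_dim)
        stacked = torch.stack(outs, dim=1)                                   # (batch, n_experts, out_dim)
        weights = torch.softmax(self.gate(torch.cat(outs, dim=-1)), dim=-1)  # (batch, n_experts), sums to 1
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)                  # weighted final prediction
```

Composed with the FFNExpert and TCNExpert sketches above, an instance such as MoEPredictor([FFNExpert(8), TCNExpert(4)]) would combine an instantaneous-sensor expert with a log-sequence expert; the numbers 8 and 4 are placeholder feature dimensions.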
Finally, the input features are inferred through the built MoE model to produce the prediction. The mixture-of-experts (MoE) model comprises a plurality of FFN and TCN sub-models, and the outputs of the sub-models are integrated through a highly flexible gating network. The gating network not only dynamically selects and adjusts the weights of each sub-model, but also optimizes the weights in real time according to changes in the real-time data stream, ensuring that the model output always reflects the latest environmental conditions. This design enables the MoE model not only to accurately predict potential fire hazards, but also to adapt to various sudden changes, improving the robustness and generalization capability of the model in complex industrial environments. In addition, the system also comprises an advanced real-time monitoring and response module. This module continuously monitors environmental data collected from the various sensors, such as temperature fluctuations and smoke concentration changes. Based on the real-time reasoning result of the MoE model, the module can dynamically adjust the monitoring strategy and response parameters. For example, if the model predicts an increase in fire risk in a particular area, the monitoring module may automatically increase the scanning frequency of that area's sensors, increase the density of data acquisition, or adjust the alarm threshold of the early warning system. When the fire risk is determined to be high, the system can automatically trigger an audible and visual alarm and rapidly notify plant management personnel and the emergency response team through an automatic communication system to make the corresponding disaster prevention preparations.
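The dynamic monitoring-and-response behaviour described above could be captured by a very simple policy rule; the sketch below is hypothetical (the 0.5 and 0.8 break-points, the polling intervals and the trigger_alarm placeholder are assumptions, not values given in the description).

```python
from dataclasses import dataclass

@dataclass
class MonitorPolicy:
    scan_interval_s: float = 30.0   # how often the area's sensors are polled
    alarm_threshold: float = 0.8    # predicted risk above which the alarm fires

def trigger_alarm(risk: float) -> None:
    # Placeholder for the audible/visual alarm and the automatic notification
    # of plant managers and the emergency response team.
    print(f"FIRE RISK {risk:.2f}: alarm triggered, emergency team notified")

def adjust_policy(risk: float, policy: MonitorPolicy) -> MonitorPolicy:
    """Tighten monitoring as the MoE model's predicted risk rises."""
    if risk >= policy.alarm_threshold:
        trigger_alarm(risk)
        return MonitorPolicy(scan_interval_s=2.0, alarm_threshold=policy.alarm_threshold)
    if risk >= 0.5:                 # elevated but not yet critical
        return MonitorPolicy(scan_interval_s=5.0, alarm_threshold=policy.alarm_threshold)
    return MonitorPolicy()          # back to the default cadence
```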
According to the embodiment of the invention, the efficiency of fire prevention and handling is greatly improved through an intelligent dynamic adjustment mechanism, ensuring the safety of the factory and the continuity of production. With such a highly automated monitoring and response system, the factory can effectively address potential fire hazards at an early stage, thereby significantly reducing the risk of disaster and the possible losses.
As shown in fig. 2, as a preferred embodiment of the present invention, the step of extracting the multi-mode data to obtain depth features specifically includes:
S201, analyzing, with an LSTM, the multi-mode data acquired by the temperature sensor and the smoke detector, identifying abnormal patterns of temperature and smoke concentration, and extracting the corresponding time sequence features;
S202, extracting flame and smoke characteristics in visual data based on an edge characteristic detection function of a convolutional neural network.
The step of carrying out feature fusion through the attention network model and outputting the depth fusion features specifically comprises the following steps:
S203, outputting a mutual weighted attention score among the modal features based on an attention network model, wherein the attention network model is a multi-layer neural network;
S204, introducing a synchronization mechanism into the attention network model to ensure the temporal consistency of the features before fusion, handling the temporal misalignment of data from different modes, and outputting the depth fusion features.
In the embodiment of the invention, in order to fully mine and utilize the rich information in the multi-mode data, deep feature learning is performed on the input features, i.e., high-level features useful for fire hazard prediction are extracted from each modality. Features are extracted from the different data sources: temperature sensor data, smoke detector data and equipment operation logs are processed with a long short-term memory network (LSTM), a network particularly suited to processing and predicting important events with long intervals and delays in time series data. The LSTM is used to analyze the data from the temperature sensors and smoke detectors. The data provided by these sensors typically consist of time-stamped temperature and smoke concentration readings, and the LSTM can learn the time dependence in these data to identify abnormal patterns of temperature and smoke concentration, which tend to indicate a potential fire. The LSTM effectively manages long-term dependencies through its internal states and gating mechanisms, which makes it very effective in capturing dynamic changes in time series data; it can identify from historical data a rising temperature trend or a sudden increase in smoke concentration, both important indicators of a potential fire. The corresponding time series features are thus extracted with the LSTM. For video surveillance data, a convolutional neural network (CNN) is used to extract visual features, particularly the visual signs of flames and smoke. Through its multi-layer structure, the CNN can efficiently process image data, detecting everything from basic edges to more complex shapes and objects. In fire monitoring and prediction, the CNN can accurately identify and locate flames and smoke in video by learning visual features at different levels: the first layers of the CNN recognize simple shapes and edges, while the deeper layers can resolve complex objects such as the dynamic changes of flames and the propagation patterns of smoke.
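A compact PyTorch sketch of the two feature extractors discussed above follows; the hidden sizes, channel counts and the two-stage convolutional stack are assumptions chosen only to make the example runnable.

```python
import torch
import torch.nn as nn

class SensorLSTM(nn.Module):
    """Summarizes time-stamped temperature / smoke-concentration readings,
    e.g. a rising-temperature trend, into a fixed-size feature vector."""
    def __init__(self, n_sensors: int = 2, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, n_sensors)
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]                                     # (batch, hidden) sequence summary

class VisualCNN(nn.Module):
    """Extracts flame/smoke cues from a video frame; early filters respond to
    edges, deeper filters to larger flame- and smoke-like structures."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # edge-level filters
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # larger visual patterns
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:  # frames: (batch, 3, H, W)
        return self.proj(self.features(frames).flatten(1))
```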
The extracted depth features are then weighted and fused through an attention mechanism: the depth features extracted from the different modalities are combined through a cross-modal attention mechanism to form a multi-dimensional feature representation. The attention mechanism employs a multi-layer neural network whose inputs are the features from each modality and whose outputs are mutually weighted attention scores between the modality features. The modality features are weighted and combined according to the learned attention weights so as to emphasize the features that are more critical to the current task. For example, the system can automatically increase the weight of the other sensor features when the visual data quality is affected by lighting or occlusion. In addition, to deal with the temporal misalignment of data from different modalities, a synchronization mechanism is introduced to ensure the temporal consistency of the features before fusion. In this way, the fused features are rich and global, significantly improving the model's overall understanding of the scene, and comprehensive high-level features are generated to improve the accuracy and robustness of fire hazard prediction.
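The cross-modal attention fusion described above might be sketched as follows; the projection dimension, the two-layer scoring network, and the assumption that the per-modality features have already been synchronized onto a common timestamp grid are illustrative choices rather than details fixed by the description.

```python
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Scores each modality's feature vector with a small multi-layer network
    and combines the modalities with softmax weights, so that a degraded
    modality (e.g. an occluded camera) receives less weight. Features are
    assumed to be aligned in time before this module is called."""
    def __init__(self, dims: list[int], fused_dim: int = 64):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, fused_dim) for d in dims])
        self.score = nn.Sequential(            # multi-layer attention scorer
            nn.Linear(fused_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats[i]: (batch, dims[i]) feature from modality i
        projected = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)  # (B, M, F)
        scores = self.score(projected).squeeze(-1)            # (B, M) attention scores
        weights = torch.softmax(scores, dim=-1)                # weights across modalities
        return (weights.unsqueeze(-1) * projected).sum(dim=1)  # (B, F) fused feature
```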
As shown in fig. 3, the embodiment of the invention further provides a factory fire safety hazard prediction system based on a MoE large model, which comprises:
The multi-mode data acquisition module 100 is used for acquiring multi-mode data, wherein the multi-mode data comprises environment monitoring data and equipment operation logs, and the equipment operation logs record the operation efficiency, history maintenance information and fault abnormality indexes of equipment;
The feature extraction and fusion module 200 is used for extracting multi-mode data to obtain depth features, carrying out feature fusion through the attention network model, and outputting the depth fusion features;
the MoE large model construction module 300 is configured to construct a MoE model including a common feedforward neural network sub-model and a time convolution network sub-model, and dynamically select and adjust weights of each common feedforward neural network sub-model and the time convolution network sub-model based on a gating network;
the feature reasoning prediction module 400 is configured to input the depth fusion feature into the MoE model, perform reasoning, and generate a prediction result.
As a preferred embodiment of the present invention, the feature extraction and fusion module 200 includes:
The time sequence feature unit is used for analyzing, with an LSTM, the multi-mode data acquired by the temperature sensor and the smoke detector, identifying abnormal patterns of temperature and smoke concentration, and extracting the corresponding time sequence features;
And the edge feature detection unit is used for extracting flame and smoke features in the visual data based on an edge feature detection function of the convolutional neural network.
As a preferred embodiment of the present invention, the feature extraction and fusion module 200 further includes:
A weighted attention score unit for outputting a mutually weighted attention score between modal features based on an attention network model, the attention network model being a multi-layer neural network;
And the time misalignment processing unit is used for introducing a synchronization mechanism into the attention network model to ensure the temporal consistency of the features before fusion, handling the temporal misalignment of data from different modes, and outputting the depth fusion features.
The foregoing description of the preferred embodiments is not intended to limit the invention; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the invention shall fall within the scope of protection of the invention.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed in sequence but may be performed in turn or in alternation with at least a portion of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

CN202411089656.5A (priority date 2024-08-09, filing date 2024-08-09): Factory fire safety hazard prediction method and system based on MoE large model. Status: Pending. Publication: CN119090053A (en).

Priority Applications (1)

Application number: CN202411089656.5A (publication CN119090053A (en)); priority date: 2024-08-09; filing date: 2024-08-09; title: Factory fire safety hazard prediction method and system based on MoE large model

Applications Claiming Priority (1)

Application number: CN202411089656.5A (publication CN119090053A (en)); priority date: 2024-08-09; filing date: 2024-08-09; title: Factory fire safety hazard prediction method and system based on MoE large model

Publications (1)

Publication number: CN119090053A (en); publication date: 2024-12-06

Family

ID=93694767

Family Applications (1)

Application number: CN202411089656.5A (Pending, CN119090053A (en)); priority date: 2024-08-09; filing date: 2024-08-09; title: Factory fire safety hazard prediction method and system based on MoE large model

Country Status (1)

Country: CN; publication: CN119090053A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
CN119885042A* (priority 2025-03-26, published 2025-04-25, 深圳地球大师科技有限公司): Method, device, equipment and medium for monitoring abnormality of plasma oven
CN120123829A* (priority 2025-05-14, published 2025-06-10, 北京安宁威尔应急消防安全科技有限公司): Method and device for identifying fire hazards through AI visual analysis technology

Similar Documents

Publication / Title
CN118072255B: Intelligent park multisource data dynamic monitoring and real-time analysis system and method
CN119090053A: Factory fire safety hazard prediction method and system based on MoE large model
CN119004367A: Intelligent target system data monitoring method and system based on large model
CN118097918B: Intelligent monitoring and alarming method and system for fire fighting fire disaster
CN115358155A: Power big data abnormity early warning method, device, equipment and readable storage medium
CN119089351A: A data processing method and system based on industrial Internet
CN119479230A: A linkage early warning method based on security situation assessment and its application
CN117695573A: Wireless temperature measurement early warning type high-voltage control system carrying automatic fire extinguishing protection system
Shirshahi et al.: Diagnosing root causes of faults based on alarm flood classification using transfer entropy and multi-sensor fusion approaches
CN119691568A: Sewage treatment process abnormal condition identification method and system based on deep neural network
CN117366661A: Heating system fault data analysis method, system, equipment and storage medium
CN119809312A: A method and system for establishing real-time operation risk warning of cascade power plants
CN119025870A: Multimodal artificial intelligence quality defect prediction method
CN120279689A: Comprehensive safety monitoring alarm system and fault prediction method for distribution room
CN120234665A: A method and system for monitoring power safety hazards based on power consumption data cloud platform
CN119133655A: A battery thermal runaway focusing control method
CN120065972A: Intelligent monitoring disc system based on multi-mode data fusion and fault early warning method
JP7678257B2: Intelligent anomaly detection device and its control method
CN118071140A: Data processing method and device, electronic equipment and storage medium
CN118040889A: Intelligent monitoring method and system for power distribution room
Zhang et al.: Is the real-time data of process safety reliable? An anomaly detection method based on the graph neural network
CN120086775B: A method, system, device and medium for unsupervised anomaly detection of industrial Internet of Things devices based on adaptive deep representation learning
CN116338113B: Environment intelligent perception simulation system, method, server and storage medium
CN120106317A: Electrical fire prediction method and system based on Internet of Things
CN120564323A: Fire disaster early warning method and device based on recognition algorithm and electronic equipment

Legal Events

Code / Title
PB01: Publication
SE01: Entry into force of request for substantive examination
