CN112836760A - Wearable device-based manual assembly task performance recognition system and method - Google Patents


Info

Publication number
CN112836760A
Authority
CN
China
Prior art keywords
signal
assembly
data
arm
wearable device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110192232.1A
Other languages
Chinese (zh)
Inventor
马靓
张占武
傅佳伟
曹柳星
孟国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Henan Yuzhan Precision Technology Co Ltd
Original Assignee
Tsinghua University
Henan Yuzhan Precision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Henan Yuzhan Precision Technology Co Ltd
Priority to CN202110192232.1A
Publication of CN112836760A
Legal status: Pending

Abstract


The present invention provides a wearable device-based manual assembly task performance recognition system comprising a display terminal, a wearable device, and a computing terminal. The display terminal includes an interaction device for accepting operation input and feeding back evaluation results. The computing terminal filters signal data reflecting the operator's assembly operation state, extracts statistical features from it, and performs monitoring and evaluation; the monitoring and evaluation process uses at least one machine learning method and network learning model. The wearable device is worn by the operator during assembly operations, establishes data connections with the display terminal and the computing terminal, collects the signal data and sends it to the computing terminal, and receives the computing terminal's detection and evaluation results, which are fed back to the operator through the display terminal. The invention realizes real-time evaluation of production-line assembly performance and improves accuracy.

Figure 202110192232

Description

System and method for identifying performance of manual assembly task based on wearable equipment
Technical Field
The invention relates to a performance identification system, in particular to a performance identification system for a manual assembly task based on a wearable device.
Background
With the development of the social economy, the number of large-scale production assembly enterprises is increasing day by day, and the population of production-line assembly operators is expanding accordingly. As important members of a production assembly system, assembly workers are often affected by fatigue at the workplace. Studies have shown that individuals in a fatigued state experience a decline in cognitive ability and manual performance, including stability during movement, flexibility of the fingers and fingertips, and the like. The yield and quality of the product are also affected by the physiological fatigue of operators, and there is evidence that 20-40% of quality problems are caused by human error in operations requiring delicate manual work.
For companies with a large number of employees, the reduced work efficiency, reduced product quality, absenteeism, and the like caused by physiological fatigue are not negligible. To improve this situation, besides improving the management mode and standards of the production field and strengthening the training of technical workers, realizing real-time monitoring of assembly performance through technological means has become a consensus among production enterprises and engineering practitioners.
In recent years, various attempts and studies have been made to realize real-time supervision of assembly performance, but radio frequency identification technologies such as RFID have generally been adopted to label materials: after an assembled product passes through a quality inspection workstation, assembly performance data are synchronized to the cloud for feedback. This feedback mode receives little attention from the assembler himself and is slow. How to realize real-time evaluation of production-line assembly performance has become a technical problem to be solved urgently.
Disclosure of Invention
The invention aims to provide an identification system capable of predicting the assembly efficiency of an assembly worker in real time, which takes physiological signals of the assembly worker as input to predict the assembly efficiency.
The technical scheme of the invention is as follows.
The first aspect of the invention provides a wearable device-based manual assembly task performance recognition system, which comprises a display terminal, a wearable device, and a computing terminal, wherein:
The display terminal comprises an interaction device used for receiving operation input and feeding back an evaluation result;
the computing terminal is used for filtering and extracting statistical characteristics of signal data reflecting the assembling operation state of an operator, and monitoring and evaluating the signal data; the monitoring and evaluating process uses at least one machine learning method and a network learning model;
the wearable device is used for being worn on the body of an operator during assembly operation, establishing data connection with the display terminal and the computing terminal, acquiring signal data, sending the signal data to the computing terminal, receiving a detection evaluation result of the computing terminal, and feeding the result back to the operator through the display terminal.
Preferably, the signal data acquired by the wearable device comprises an arm muscle surface electromyographic signal of the operator.
Preferably, the signal data further includes an arm movement acceleration signal, an arm movement angular velocity signal, and an arm rotation angle signal.
A second aspect of the present invention provides a performance evaluation method using the performance recognition system for a manual assembly task according to any one of the first aspects of the present invention, including the steps of:
step S1, inputting operator information and evaluation algorithm type through an interactive device of the display terminal, and collecting signal data reflecting the assembling operation state of the operator through the wearable equipment;
step S2, transmitting the signal data to a computing terminal;
step S3, filtering and extracting statistical characteristics of the signal data, monitoring and evaluating, and sending the evaluation result to a display terminal; the monitoring and evaluating process uses at least one machine learning method and a network learning model according to the input evaluation algorithm type;
and step S4, displaying corresponding performance indexes according to the evaluation result through an interactive device.
Preferably, the process of filtering and extracting statistical features from the signal data in step S3 includes the following steps:
s3.1, extracting signal data of a specific assembly task according to the starting time and the ending time of each assembly task;
s3.2, performing noise reduction smoothing processing on the myoelectric signals of the surfaces of the muscles of the arms by using a first filter, and performing noise reduction smoothing processing on the acceleration signals of the movement of the arms and the angular velocity signals of the arms by using a second filter;
s3.3, dividing the processed arm muscle surface electromyographic signals by utilizing sliding time windows, and calculating the mean value and the root mean square of the signals in each time window to obtain an arm muscle surface electromyographic signal mean value time sequence and an arm muscle surface electromyographic signal root mean square time sequence;
step S3.4, dividing the processed arm motion acceleration signal, arm angular velocity signal, and arm rotation angle signal by using a sliding time window, and calculating the mean and root mean square of the signals within each time window, thereby obtaining two time series for each signal, namely a mean time series and a root-mean-square time series of the arm motion acceleration signal, of the arm angular velocity signal, and of the arm rotation angle signal;
and step S3.5, extracting descriptive statistics of each time series to describe the parameter distribution within the period.
Preferably, the first filter in step S3.2 is a 30 Hz fourth-order Butterworth filter, and the second filter is a third-order median filter.
Preferably, the sliding time window in step S3.3 has a length of 0.25 s and an overlap length of 0.075 s.
Preferably, the sliding time window in step S3.4 has a length of 0.3 s and an overlap length of 0.1 s.
Preferably, the descriptive statistics in step S3.5 include: mean, variance, median, mode, kurtosis, and skewness.
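The sliding-window division of steps S3.3-S3.4 can be sketched in NumPy as follows; the function name, the sampling-rate argument `fs`, and the loop structure are illustrative assumptions, using the preferred 0.25 s window / 0.075 s overlap for the sEMG channel.

```python
import numpy as np

def sliding_window_features(signal, fs, win_s=0.25, overlap_s=0.075):
    """Split a 1-D signal into overlapping windows and return the per-window
    mean and RMS time series (0.25 s window / 0.075 s overlap as in S3.3)."""
    win = int(win_s * fs)
    step = win - int(overlap_s * fs)  # consecutive windows share overlap_s seconds
    means, rmss = [], []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        means.append(w.mean())
        rmss.append(np.sqrt(np.mean(w ** 2)))
    return np.array(means), np.array(rmss)
```

The 0.3 s / 0.1 s setting of step S3.4 would be obtained by passing `win_s=0.3, overlap_s=0.1`.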
The third aspect of the present invention provides a training method for the network model in the wearable device-based manual assembly task performance recognition system according to any one of the first aspects of the present invention, comprising the following steps:
step S01, constructing the data set ASSEMBLY from the operator information and the data collected by the wearable device in step S1;
step S02, partially labeling all data of the data set ASSEMBLY to form the data set ASSEMBLY_Output of manually judged performance, while forming the data set ASSEMBLY_Input of physiological signal features collected by the wearable device;
step S03, randomly splitting the data in the data set ASSEMBLY, wherein 70% of the data forms the training data set Train_Set and the other 30% forms the test data set Test_Set;
step S04, training two machine learning methods, GBDT and LDA, and an RNN deep learning network model on the training data set Train_Set to obtain the models GBDT_trained, LDA_trained, and RNN_trained;
step S05, classifying the ASSEMBLY_Input inputs in the training data set Train_Set with the GBDT_trained, LDA_trained, and RNN_trained models to obtain the performance evaluation data output set ASSEMBLY_Trained_Output;
step S06, using the errors between the performance evaluation data output set ASSEMBLY_Trained_Output and the ASSEMBLY_Output in the training data set Train_Set to enhance GBDT_trained, LDA_trained, and RNN_trained, obtaining the enhanced machine learning models GBDT_improved, LDA_improved, and RNN_improved;
step S07, testing the performance of the enhanced machine learning models GBDT_improved, LDA_improved, and RNN_improved with the test data set Test_Set; if the test result is qualified, training ends; if not, the parameters of the three network models are adjusted and the process is executed again from step S03.
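As a rough illustration of the split-train-evaluate flow in steps S03-S07, the sketch below uses scikit-learn stand-ins (GradientBoostingClassifier for GBDT, LinearDiscriminantAnalysis for LDA; the RNN branch and the error-driven enhancement are omitted) on synthetic data; none of the data or model settings are the patent's actual implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
assembly_input = rng.normal(size=(200, 12))               # stand-in physiological features
assembly_output = (assembly_input[:, 0] > 0).astype(int)  # stand-in performance labels

# Step S03: random 70/30 split into Train_Set and Test_Set.
X_train, X_test, y_train, y_test = train_test_split(
    assembly_input, assembly_output, test_size=0.3, random_state=0)

# Step S04: train the candidate models on Train_Set.
models = {
    "GBDT_trained": GradientBoostingClassifier(random_state=0),
    "LDA_trained": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X_train, y_train)

# Steps S05-S07: score on Test_Set; parameters would be retuned and the
# split redone if a model's result is not "qualified".
scores = {name: m.score(X_test, y_test) for name, m in models.items()}
```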
Through the technical scheme, the invention can obtain the following beneficial effects.
The work activity information of an operator is collected using wearable equipment (including sensor equipment configured at a workstation) and is further identified and recorded through machine learning algorithms and communication equipment, so that physiological fatigue in the workplace is identified in real time and can be intervened upon.
The invention compares six types of algorithm models under three assembly scenarios in a production line and selects the three machine learning and deep learning network models with the highest prediction accuracy. Compared with other machine learning models, the models used in the invention have clear advantages: the prediction accuracy of the LDA machine learning model at a stability task station in a production assembly line is above 90%, while that of other machine learning models is only about 80%; the prediction accuracy of the RNN deep learning model at a flexibility task station is above 88%, while that of other models is only about 70%; the prediction accuracy of the GBDT machine learning model at a screw assembly station is above 99%, while that of other models is only about 87%.
Drawings
FIG. 1 is a partial block diagram of embodiment 1 of the present invention;
FIG. 2 is a flowchart of embodiment 2 of the present invention;
FIG. 3 is a flow chart of a signal filtering and statistical feature extraction method according to embodiment 3 of the present invention;
fig. 4 is a flowchart of a deep learning network model building method according to embodiment 3 of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are for purposes of illustration and explanation only and are not intended to limit the present invention.
Example 1
This embodiment provides a wearable device-based manual assembly task performance recognition system, which comprises a display terminal, a wearable device, and a computing terminal, wherein the wearable device is connected with the computing terminal and the display terminal via Bluetooth.
As shown in fig. 1, the display terminal includes an interactive device for accepting an operation input and feeding back an evaluation result. The interface of the interaction device comprises a worker basic information acquisition module, a real-time data display module, a classification algorithm selection module and an evaluation result feedback module.
The computing terminal comprises a computer provided with performance recognition software and is used for filtering and extracting statistical characteristics of signal data reflecting the assembling operation state of an operator, monitoring and evaluating; the process of monitoring and evaluating uses at least one machine learning method and a network learning model. The performance identification software comprises a signal filtering module, a statistical feature extraction module and a monitoring and evaluating module, wherein the monitoring and evaluating module comprises a machine learning and deep learning network model. The output end of the signal filtering module is connected with the input end of the statistical characteristic extraction module, and the output end of the statistical characteristic extraction module is connected with the input end of the monitoring evaluation module.
The wearable device is used for being worn on the body of an operator during assembly operation, is in data connection with the display terminal and the computing terminal, can collect signal data, sends the signal data to the computing terminal, receives a detection evaluation result of the computing terminal, and feeds the detection evaluation result back to the operator through the display terminal. The wearable device is connected with the input end of the real-time data display module of the display terminal through data connection on one hand, and is connected with the input end of the signal filtering module of the computing terminal through data connection on the other hand.
In a preferred embodiment, the computing terminal is implemented by a desktop computer with a bluetooth function and a Matlab platform, the display terminal is a touch display with a UI graphical interface based on the Matlab platform, and the wearable device is a Myo bracelet. Those skilled in the art will appreciate that any suitable implementation known in the art may be used with the present invention, and may be readily changed by the user if desired. In particular, the wearable device for the present invention must contain metal electrodes capable of acquiring surface myoelectricity and IMU sensors capable of acquiring arm movement signals.
Example 2
The present embodiment provides an assembly performance evaluation method. The method of the present embodiment is implemented by the system of embodiment 1. As shown in fig. 2, the method of the present embodiment includes the following steps performed in sequence.
Step S1, data acquisition. Basic information such as the age, sex, height, and weight of an assembly worker is acquired through the worker basic information acquisition module of the display terminal's interaction device, and surface myoelectricity, triaxial arm movement acceleration, triaxial arm movement angular velocity, and triaxial arm movement angle are acquired in real time through the wearable device and transmitted to the computing terminal and the display terminal. The user selects a specific evaluation algorithm type from the three machine learning and deep learning network models using the classification algorithm selection module.
Step S2, data transmission. The data collected in step S1 are sent to the signal filtering module of the computing terminal by means of data transmission such as Bluetooth.
Step S3, data processing. The data processing in this step comprises three aspects: the signal filtering module performs noise-reduction smoothing on the received data; the statistical feature extraction module segments the smoothed data into two groups of time series using a sliding-time-window method, then obtains the statistical features of the two groups of time series and inputs them to the monitoring and evaluation module; and the monitoring and evaluation module performs performance detection on the arm muscle surface myoelectricity, arm movement acceleration, arm angular velocity, and arm rotation angle data using the selected evaluation algorithm, outputs an evaluation value, and displays it on screen through the display terminal.
As shown in fig. 3, the signal filtering and statistical feature extraction includes the following steps.
Step S3.1, the four input signals of a specific assembly task are extracted using the start and end times of each assembly task recorded by the assembly system, and are recorded as the arm muscle surface electromyogram signal_i, the arm motion acceleration signal_i, the arm angular velocity signal_i, and the arm rotation angle signal_i.
Step S3.2, noise-reduction smoothing is performed on the arm muscle surface electromyographic signal_i using a 30 Hz fourth-order Butterworth filter to obtain the arm muscle surface electromyographic signal_i_filtered, and noise-reduction smoothing is performed on the arm motion acceleration signal_i and the arm angular velocity signal_i using a third-order median filter to obtain the arm motion acceleration signal_i_filtered, the arm angular velocity signal_i_filtered, and the arm rotation angle signal_i_filtered.
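A minimal SciPy sketch of this filtering step is shown below; the 200 Hz sampling rate, the low-pass interpretation of the "30 Hz Butterworth" filter, and the synthetic test signals are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

fs = 200.0                                   # assumed sampling rate, Hz
b, a = butter(4, 30.0, btype="low", fs=fs)   # 4th-order Butterworth, 30 Hz cutoff

t = np.arange(0, 1, 1 / fs)
# Stand-in sEMG: a 5 Hz component plus 80 Hz high-frequency noise.
emg = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)
emg_filtered = filtfilt(b, a, emg)           # zero-phase noise-reduction smoothing

# Stand-in IMU channel with a spike artefact, cleaned by a 3rd-order median filter.
acc = np.sin(2 * np.pi * 2 * t)
acc[50] = 10.0
acc_filtered = medfilt(acc, kernel_size=3)
```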
Step S3.3, the arm muscle surface electromyographic signal_i_filtered is divided using a sliding time window, and the mean and root mean square of the signal are calculated within each time window to obtain two time series, namely the arm muscle surface electromyographic signal_i_filtered_average and signal_i_filtered_RMS. In a preferred embodiment, the time window length is 0.25 s and the overlap length is 0.075 s.
Step S3.4, the sliding time window is used to divide the arm motion acceleration signal_i_filtered, the arm angular velocity signal_i_filtered, and the arm rotation angle signal_i_filtered, and the mean and root mean square are calculated within each time window, obtaining two time series for each signal: the signal_i_filtered_average and signal_i_filtered_RMS series of the arm motion acceleration, the arm angular velocity, and the arm rotation angle. In a preferred embodiment, the time window length is 0.3 s and the overlap length is 0.1 s.
Step S3.5, six descriptive statistics are extracted for each time series to describe the parameter distribution within the period: mean, variance, median, mode, kurtosis, and skewness, as the output of signal filtering and statistical feature extraction.
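The six descriptive statistics of step S3.5 might be computed as below; `scipy.stats` provides kurtosis and skewness, and approximating the mode of a continuous series by histogram binning is an added assumption, not something the patent specifies.

```python
import numpy as np
from scipy import stats

def describe_series(x, bins=10):
    """Return the six descriptive statistics of step S3.5 for one time series.
    The mode of a continuous series is approximated by the centre of the
    most populated histogram bin (an illustrative choice)."""
    hist, edges = np.histogram(x, bins=bins)
    mode_bin = int(np.argmax(hist))
    return {
        "mean": float(np.mean(x)),
        "variance": float(np.var(x)),
        "median": float(np.median(x)),
        "mode": float((edges[mode_bin] + edges[mode_bin + 1]) / 2),
        "kurtosis": float(stats.kurtosis(x)),  # excess (Fisher) kurtosis
        "skewness": float(stats.skew(x)),
    }
```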
Step S4, performance assessment feedback. The evaluation result feedback module of the display terminal displays the corresponding performance index, namely high, medium or low according to the received evaluation value.
Example 3
The embodiment provides a training method of the machine learning and deep learning network model in embodiment 1.
As shown in fig. 4, the method for training a machine learning and deep learning network model includes the following steps.
In step S01, the ASSEMBLY data set is constructed, that is, the data collected by the worker basic information acquisition module and the wearable device in step S1, including the worker's age, sex, height, weight, arm muscle surface myoelectricity, arm movement acceleration data, arm angular velocity data, arm rotation angle, etc.
Step S02, all data of the ASSEMBLY data set are manually labeled to form the data set of manually judged performance, namely the data set ASSEMBLY_Output; meanwhile, the data set of physiological signal features acquired by the wearable device is formed, namely the data set ASSEMBLY_Input.
Step S03, the ASSEMBLY data set is randomly split, with 70% constituting the training set Train_Set and 30% constituting the test set Test_Set.
Step S04, the two machine learning methods GBDT and LDA and an RNN deep learning model are trained on the data set Train_Set to obtain the models GBDT_trained, LDA_trained, and RNN_trained.
Step S05, the ASSEMBLY_Input inputs in the data set Train_Set are classified by the GBDT_trained, LDA_trained, and RNN_trained models to obtain the performance evaluation data output set, namely the data set ASSEMBLY_Trained_Output.
Step S06, the errors between the data sets ASSEMBLY_Trained_Output and ASSEMBLY_Output are used to enhance GBDT_trained, LDA_trained, and RNN_trained, obtaining the enhanced machine learning models GBDT_improved, LDA_improved, and RNN_improved.
Step S07, the performance of the GBDT_improved, LDA_improved, and RNN_improved assembly performance evaluation models is tested with the data set Test_Set; if the test result is qualified, the training of the machine learning and deep learning network models ends; if not, the parameters of the three models are adjusted, and the process is executed again from step S03.
There are many existing machine learning and deep learning network models; common models include SVM, KNN, LDA, GBDT, RNN, Boost, and the like. The invention compares six types of algorithm models under three assembly scenarios in a production line and selects the three machine learning and deep learning network models with the highest prediction accuracy. Compared with other machine learning models, the models used in the invention have clear advantages: the prediction accuracy of the LDA machine learning model at a stability task station in a production assembly line is above 90%, while that of other machine learning models is only about 80%; the prediction accuracy of the RNN deep learning model at a flexibility task station is above 88%, while that of other models is only about 70%; the prediction accuracy of the GBDT machine learning model at a screw assembly station is above 99%, while that of other models is only about 87%.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents and are included in the scope of the present invention.

Claims (10)

Translated fromChinese
1.一种基于可穿戴设备的手工装配任务绩效识别系统,包括显示终端、可穿戴设备和计算终端,其中1. A wearable device-based manual assembly task performance recognition system, comprising a display terminal, a wearable device and a computing terminal, wherein所述显示终端包括交互装置,用于接受操作输入,以及反馈评估结果;The display terminal includes an interactive device for accepting operation input and feeding back an evaluation result;所述计算终端用于对反映操作者装配操作状态的信号数据进行过滤与统计特征提取,以及监测评估;所述监测评估的过程使用至少一种机器学习方法和网络学习模型;The computing terminal is used for filtering and statistical feature extraction of signal data reflecting the operator's assembling operation state, and monitoring and evaluating; the process of monitoring and evaluating uses at least one machine learning method and network learning model;所述可穿戴设备用于在进行装配操作时穿戴在操作者身上,其与所述显示终端和计算终端建立数据连接,能够采集所述信号数据,并发送给所述计算终端,以及接收所述计算终端的检测评估结果,并通过所述显示终端反馈给操作者。The wearable device is used to be worn on the operator during assembly operations, establishes a data connection with the display terminal and the computing terminal, and can collect the signal data, send it to the computing terminal, and receive the The detection and evaluation results of the computing terminal are fed back to the operator through the display terminal.2.根据权利要求1所述的一种基于可穿戴设备的手工装配任务绩效识别系统,其特征在于,所述可穿戴设备采集的所述信号数据包括所述操作者的手臂肌肉表面肌电信号。2 . The wearable device-based manual assembly task performance recognition system according to claim 1 , wherein the signal data collected by the wearable device comprises the surface electromyographic signal of the operator’s arm muscles. 3 . .3.根据权利要求2所述的一种基于可穿戴设备的手工装配任务绩效识别系统,其特征在于,所述信号数据还包括手臂运动加速度信号、手臂运动角速度信号、手臂转角信号。3 . The wearable device-based manual assembly task performance recognition system according to claim 2 , wherein the signal data further comprises an arm motion acceleration signal, an arm motion angular velocity signal, and an arm rotation angle signal. 4 .4.一种采用根据权利要求1-3中任一项所述的手工装配任务绩效识别系统的绩效评估方法,其特征在于,包括如下步骤:4. 
a performance evaluation method employing the manual assembly task performance identification system according to any one of claims 1-3, is characterized in that, comprises the steps:步骤S1,通过显示终端的交互装置输入操作者信息和评估算法类型,并且通过可穿戴设备采集反映操作者装配操作状态的信号数据;Step S1, input operator information and evaluation algorithm type through the interactive device of the display terminal, and collect signal data reflecting the operator's assembly operation state through the wearable device;步骤S2,将所述信号数据发送到计算终端;Step S2, sending the signal data to the computing terminal;步骤S3,对所述信号数据进行过滤与统计特征提取,以及监测评估,并且将评估结果发送给显示终端;所述监测评估的过程按照输入的评估算法类型使用至少一种机器学习方法和网络学习模型;Step S3, performing filtering and statistical feature extraction on the signal data, and monitoring and evaluating, and sending the evaluation result to the display terminal; the process of monitoring and evaluating uses at least one machine learning method and network learning method according to the input evaluation algorithm type. Model;步骤S4,通过交互装置根据所述评估结果显示相应的绩效指标。Step S4, displaying the corresponding performance index according to the evaluation result through the interactive device.5.根据权利要求4所述的绩效评估方法,其特征在于,所述步骤S3中对所述信号数据进行过滤与统计特征提取的过程包括以下步骤:5. 
The performance evaluation method according to claim 4, wherein the process of filtering and statistical feature extraction on the signal data in the step S3 comprises the following steps:步骤S3.1,按照每次装配任务的开始和结束时间将特定装配任务的信号数据提取出来;Step S3.1, extracting the signal data of a specific assembly task according to the start and end time of each assembly task;步骤S3.2,利用第一滤波器对手臂肌肉表面肌电信号进行降噪平滑处理,利用第二滤波器对手臂运动加速度信号和手臂角速度信号进行降噪平滑处理;Step S3.2, using the first filter to perform noise reduction and smoothing processing on the arm muscle surface EMG signal, and using the second filter to perform noise reduction and smoothing processing on the arm motion acceleration signal and the arm angular velocity signal;步骤S3.3,利用滑动时间窗对处理后的手臂肌肉表面肌电信号进行划分,在每个时间窗内计算信号的均值与均方根,由此获得手臂肌肉表面肌电信号均值时间序列、手臂肌肉表面肌电信号均方根时间序列;Step S3.3, use the sliding time window to divide the processed arm muscle surface EMG signal, and calculate the mean value and root mean square of the signal in each time window, thereby obtaining the arm muscle surface EMG signal mean time series, The root mean square time series of the surface EMG signal of the arm muscle;步骤S3.4,利用滑动时间窗对处理后的手臂运动加速度信号、手臂角速度信号、手臂转角信号进行划分,在每个时间窗内计算信号的均值与均方根,由此对每种信号获得两组时间序列,手臂运动加速度信号均值时间序列、手臂运动加速度信号均方根时间序列、手臂角速度信号均值时间序列、手臂角速度信号均方根时间序列、手臂转角信号均值时间序列、手臂转角信号均方根时间序列;Step S3.4, use the sliding time window to divide the processed arm motion acceleration signal, arm angular velocity signal, and arm rotation angle signal, and calculate the mean value and root mean square of the signals in each time window, thereby obtaining each signal. 
Two groups of time series, arm motion acceleration signal mean time series, arm motion acceleration signal root mean square time series, arm angular velocity signal mean time series, arm angular velocity signal root mean square time series, arm rotation angle signal mean time series, arm rotation angle signal mean time series square root time series;步骤S3.5,提取每个时间序列的描述统计量描述周期内的参数分布。Step S3.5, extracting the descriptive statistics of each time series to describe the parameter distribution in the period.6.根据权利要求5所述的绩效评估方法,其特征在于,所述步骤S3.2中的第一滤波器是30Hz的四阶Butterworth滤波器,第二滤波器是三阶中值滤波器。6 . The performance evaluation method according to claim 5 , wherein the first filter in the step S3.2 is a 30Hz fourth-order Butterworth filter, and the second filter is a third-order median filter. 7 .7.根据权利要求5所述的绩效评估方法,其特征在于,所述步骤S3.3中的滑动时间窗长度为0.25s,重叠长度为0.075s。7 . The performance evaluation method according to claim 5 , wherein the sliding time window length in step S3.3 is 0.25s, and the overlapping length is 0.075s. 8 .8.根据权利要求5所述的绩效评估方法,其特征在于,所述步骤S3.4中的滑动时间窗长度为0.3s,重叠长度为0.1s。8 . The performance evaluation method according to claim 5 , wherein the sliding time window in step S3.4 has a length of 0.3s and an overlap length of 0.1s. 9 .9.根据权利要求5所述的绩效评估方法,其特征在于,所述步骤S3.5中的描述统计量包括:均值、方差、中位数、众数、峰度以及偏度。9 . The performance evaluation method according to claim 5 , wherein the descriptive statistics in the step S3.5 include: mean, variance, median, mode, kurtosis and skewness. 10 .10.用于权利要求1-3任一项所述的一种基于可穿戴设备的手工装配任务绩效识别系统中的网络模型的训练方法,包括以下步骤:10. 
10. A method for training the network model in the wearable-device-based manual assembly task performance recognition system according to any one of claims 1-3, comprising the following steps:
Step S01, constructing a data set ASSEMBLY from the operator information in step S1 and the data collected by the wearable device;
Step S02, partially labeling all data in the data set ASSEMBLY to form a data set ASSEMBLY_Output of manually judged performance, and at the same time forming a data set ASSEMBLY_Input of physiological signal features collected by the wearable device;
Step S03, randomly splitting the data in the data set ASSEMBLY, with 70% forming the training data set Train_Set and the remaining 30% forming the test data set Test_Set;
Step S04, training two machine learning methods, GBDT and LDA, and one RNN deep-learning network model on the training data set Train_Set, obtaining the models GBDT_trained, LDA_trained, and RNN_trained;
Step S05, classifying the ASSEMBLY_Input portion of the training data set Train_Set with the GBDT_trained, LDA_trained, and RNN_trained models to obtain the performance evaluation output set ASSEMBLY_Trained_Output;
Step S06, using the error between the performance evaluation output set ASSEMBLY_Trained_Output and the ASSEMBLY_Output portion of the training data set Train_Set to enhance GBDT_trained, LDA_trained, and RNN_trained, obtaining the enhanced machine learning models GBDT_improved, LDA_improved, and RNN_improved;
Step S07, testing the performance of the enhanced machine learning models GBDT_improved, LDA_improved, and RNN_improved on the test data set Test_Set; if the test results are satisfactory, training ends; if not, the parameters of the three network models are adjusted and execution resumes from step S03.
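The split-train-test loop of steps S03-S07 can be sketched with scikit-learn on synthetic data. The feature matrix and labels stand in for ASSEMBLY_Input and ASSEMBLY_Output, and the RNN branch is omitted (it would require a sequence-learning framework); everything else here is an illustrative assumption, not the patent's implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))            # stand-in for ASSEMBLY_Input features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for ASSEMBLY_Output labels

# Step S03: random 70/30 split into Train_Set and Test_Set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step S04: train the two classical models (GBDT_trained, LDA_trained)
gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

# Steps S05-S06 analogue: classify Train_Set and measure the error against
# the labels (the patent uses this error to drive model enhancement)
train_err_gbdt = 1.0 - gbdt.score(X_tr, y_tr)

# Step S07 analogue: check performance on Test_Set before accepting the models
acc_gbdt, acc_lda = gbdt.score(X_te, y_te), lda.score(X_te, y_te)
print(f"GBDT test accuracy: {acc_gbdt:.2f}, LDA test accuracy: {acc_lda:.2f}")
```

If the Test_Set accuracies fall below a chosen acceptance threshold, the loop would adjust the model hyperparameters and repeat from the split in step S03, as claim 10 describes.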
CN202110192232.1A | Priority date 2021-02-19 | Filing date 2021-02-19 | Wearable device-based manual assembly task performance recognition system and method | Pending | CN112836760A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110192232.1A | 2021-02-19 | 2021-02-19 | Wearable device-based manual assembly task performance recognition system and method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110192232.1A | 2021-02-19 | 2021-02-19 | Wearable device-based manual assembly task performance recognition system and method

Publications (1)

Publication Number | Publication Date
CN112836760A | 2021-05-25

Family

ID=75933882

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202110192232.1A | Pending | CN112836760A (en) | 2021-02-19 | 2021-02-19 | Wearable device-based manual assembly task performance recognition system and method

Country Status (1)

Country | Link
CN (1) | CN112836760A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115564078A (en)* | 2022-09-28 | 2023-01-03 | Yangtze River Delta Research Institute (Huzhou), University of Electronic Science and Technology of China | Short-time-domain prediction method, system, equipment and terminal for parking space occupancy rate of parking lot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20170245806A1 (en)* | 2014-03-17 | 2017-08-31 | One Million Metrics Corp. | System and method for monitoring safety and productivity of physical tasks
CN109009028A (en)* | 2018-08-31 | 2018-12-18 | 江苏盖睿健康科技有限公司 | A wearable device reflecting the fatigue level of the human body
CN110210366A (en)* | 2019-07-05 | 2019-09-06 | Qingdao University of Technology | Assembling and screwing process sample acquisition system, deep learning network and monitoring system
CN110448281A (en)* | 2019-07-29 | 2019-11-15 | Nanjing University of Science and Technology | A wearable work fatigue detection system based on multiple sensors
US20210012902A1 (en)* | 2019-02-18 | 2021-01-14 | University of Notre Dame du Lac | Representation learning for wearable-sensor time series data
CN112256123A (en)* | 2020-09-25 | 2021-01-22 | Beijing Normal University | Brain-load-based control work efficiency analysis method, equipment and system



Similar Documents

Publication | Title
Mekruksavanich et al. | Exercise activity recognition with surface electromyography sensor using machine learning approach
Ghasemzadeh et al. | A body sensor network with electromyogram and inertial sensors: Multimodal interpretation of muscular activities
Zeng et al. | Fatigue-sensitivity comparison of sEMG and A-mode ultrasound based hand gesture recognition
Ghasemzadeh et al. | Structural action recognition in body sensor networks: Distributed classification based on string matching
CN104274191B (en) | A psychological evaluation method and system
CN116507276A (en) | Method and apparatus for machine learning to analyze musculoskeletal rehabilitation from images
Sahyoun et al. | ParkNosis: Diagnosing Parkinson's disease using mobile phones
CN107007263A (en) | A generalized sleep quality assessment method and system
CN118213039A (en) | Rehabilitation training data processing method and system based on deep reinforcement learning
CN118553425B (en) | A method and system for constructing a dynamic prediction model for medical health
CN108305680A (en) | Intelligent Parkinson's disease aided diagnosis method and device based on multiple biological features
CN113974612A (en) | A method and system for automatic assessment of upper limb motor function in stroke patients
CN116458872B (en) | Method and system for analyzing respiratory data
Vijayvargiya et al. | PC-GNN: Pearson correlation-based graph neural network for recognition of human lower limb activity using sEMG signal
CN107518896A (en) | A method and system for predicting the wearing position of an EMG armband
CN115758097A (en) | Method, system and storage medium for establishing a multi-modal human-factors intelligent state recognition model and monitoring real-time state
CN115620204B (en) | A quantitative assessment system for infant brain development based on a fusion model
CN112836760A (en) | Wearable device-based manual assembly task performance recognition system and method
Subramanian et al. | Using photoplethysmography for simple hand gesture recognition
CN105303771A (en) | Fatigue judging system and method
CN115844415A (en) | Method and system for evaluating motion stability based on electrocardiogram data
CN105046429B (en) | A method for assessing a user's mental workload during interaction based on mobile phone sensors
CN119498856A (en) | An automated assessment system and method for arm movement in patients with cerebral palsy and dystonia
CN109961090B (en) | Behavior classification method and system based on intelligent wearable device
CN116869516A (en) | A comprehensive motion assessment method and system based on multi-source heterogeneous data

Legal Events

Date | Code | Title | Description
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-05-25
