CN108564100A - The method of mobile terminal and its generation classification of motion model, storage device - Google Patents

The method of mobile terminal and its generation classification of motion model, storage device

Info

Publication number
CN108564100A
Authority
CN
China
Prior art keywords
action
data
value
classification model
action data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711337023.1A
Other languages
Chinese (zh)
Inventor
陈冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Mobile Communication Co Ltd
Original Assignee
Huizhou TCL Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co Ltd
Priority to CN201711337023.1A
Publication of CN108564100A
Legal status: Pending


Abstract

This application discloses a method for generating a motion classification model. The method includes: a mobile terminal obtains a first classification model; the mobile terminal detects and records action data of a user over a predetermined period; time-domain feature values and frequency-domain feature values are obtained from the action data; and a second classification model is obtained from the time-domain feature values, the frequency-domain feature values, and the first classification model, where the time-domain feature values include acceleration feature values and environmental feature values. This application also discloses a mobile terminal and a storage device. In this way, this application makes the generated motion classification model more accurate when classifying the user's actions.

Description

The method of mobile terminal and its generation classification of motion model, storage device
Technical field
This application relates to the field of electronic devices, and in particular to a mobile terminal, a method for it to generate a motion classification model, and a storage device.
Background technology
With the continuous upgrading and optimization of the mobile Internet and intelligent hardware, people's lifestyles and needs are constantly changing: people pay increasingly close attention to their own health and to their daily exercise.
At present, human motion classification methods match the action data recorded by a terminal against a motion classification model in order to classify the user's actions. However, a typical motion classification model is merely a combination of acceleration thresholds, and such a motion classification model cannot classify the user's actions accurately.
Summary of the invention
The technical problem mainly solved by this application is to provide a mobile terminal, a method for it to generate a motion classification model, and a storage device, which make the generated motion classification model more accurate when classifying the user's actions.
To solve the above technical problem, one technical solution adopted by this application is to provide a method for generating a motion classification model. The method includes: a mobile terminal obtains a first classification model; the mobile terminal detects and records action data of a user over a predetermined period; time-domain feature values and frequency-domain feature values are obtained from the action data; and a second classification model is obtained from the time-domain feature values, the frequency-domain feature values, and the first classification model, where the time-domain feature values include acceleration feature values and environmental feature values.
To solve the above technical problem, another technical solution adopted by this application is a motion classification method. The method includes: a mobile terminal obtains a first classification model; the mobile terminal detects and records action data of a user, the action data including first action data in a first time period and second action data in a second time period; a second classification model is obtained from the first action data and the first classification model; and the second action data is matched against the second classification model to classify the second action data.
To solve the above technical problem, another technical solution adopted by this application is to provide a mobile terminal. The mobile terminal includes a processor and a memory connected to the processor; the memory stores a computer program, and the processor calls the computer program to execute the above method.
To solve the above technical problem, another technical solution adopted by this application is to provide a storage device. The storage device stores a computer program, and the computer program can be executed to implement the above method.
The beneficial effects of this application are as follows. Unlike the prior art, in this application the mobile terminal obtains a first classification model; detects and records action data of the user over a predetermined period; obtains time-domain feature values and frequency-domain feature values from the action data; and obtains a second classification model from the time-domain feature values, the frequency-domain feature values, and the first classification model, where the time-domain feature values include acceleration feature values and environmental feature values. Because the method for generating a motion classification model derives time-domain and frequency-domain feature values from the action data and uses them together with the first classification model to obtain the second classification model, the terminal can combine the time-domain and frequency-domain data of the user's movement when classifying the user's actions, which makes the generated motion classification model more accurate.
Description of the drawings
Fig. 1 is a schematic flowchart of a method for a mobile terminal to generate a motion classification model according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a motion classification method of a mobile terminal according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the hardware structure of a mobile terminal according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a storage device according to an embodiment of the present application.
Detailed description of the embodiments
To make the above objects, features, and advantages of this application clearer and easier to understand, the specific embodiments of this application are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain this application and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to this application rather than the entire structure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", and the like in this application are used to distinguish different objects rather than to describe a specific order. In addition, the terms "comprising" and "having" and any variations of them are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of this application. The appearances of the phrase in various places of the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a method for a mobile terminal to generate a motion classification model according to an embodiment of the present application.
In this embodiment, the method for the mobile terminal to generate a motion classification model may include the following steps.
Step S11: The mobile terminal obtains a first classification model.
Optionally, the mobile terminal may be a smartphone, a wearable smart device, or a tablet computer, where the wearable smart device may be a smartwatch, smart glasses, or the like. In other embodiments, the mobile terminal may also be another movable, portable intelligent terminal device; this application does not restrict this.
Optionally, the first classification model may include classification models for multiple actions, for example walking, running, cycling, swimming, going upstairs, going downstairs, and jumping. In this embodiment, the first classification model includes the classification models of five actions: being still, walking, running, cycling, and other actions. In other embodiments, the first classification model may also include classification models of other actions; this application does not restrict this either.
The first classification model can be obtained in several ways. For example, in one embodiment, the user sends the feature data and model structure of each type of action, where the feature data includes time-domain feature data, frequency-domain feature data, environmental feature data, and so on; the processor obtains this feature data and thereby obtains the first classification model. Optionally, the processor may also store the first classification model in memory, so that the first classification model can be called from memory when motion classification is performed. In another embodiment, the user may send action data of each type of action over a predetermined period to the mobile terminal; for example, the user sends one hour of action data for each of being still, walking, running, cycling, and other actions to the processor. The processor receives this action data, stores it in memory, and uses it to build a model for each type of action, thereby obtaining the first classification model.
Step S12: The mobile terminal detects and records action data of the user over a predetermined period.
In this embodiment, the mobile terminal is carried on the user's body and moves with the user. For example, when the mobile terminal is a smartwatch, the user can wear it on the wrist; when the mobile terminal is a smartphone, the user can place it in a pocket or a bag. In this way, when the user exercises, the mobile terminal is driven to move as well, so that the mobile terminal can detect and record the user's action data.
Specifically, detecting and recording the user's action data may include: the processor of the mobile terminal controls a sensor to detect the user's action data, reads the action data detected by the sensor, and stores it in memory. After reading the action data detected by the sensor, the processor may cache the user's action data in volatile memory or store it in non-volatile memory; this embodiment does not limit this. The processor can read the sensor data cached over a period of time, that is, the user's action data cached during that period; for example, in this embodiment the processor reads the user's action data cached from the sensor over one minute.
In this embodiment, the sensors may include an environmental sensor, an acceleration sensor, a gravity sensor, and the like, where the environmental sensor may further include a light sensor, a distance sensor, and the like, and the acceleration sensor may be a three-axis acceleration sensor or a linear three-axis acceleration sensor. When the acceleration sensor is a three-axis acceleration sensor, it detects and records the acceleration data of the user's three axes (X axis, Y axis, Z axis) during exercise, and this acceleration data is affected by the gravitational acceleration at the mobile terminal. When the acceleration sensor is a linear three-axis acceleration sensor, it likewise detects and records the three-axis (X, Y, Z) acceleration data of the user during exercise, but this acceleration data is not affected by the gravitational acceleration at the mobile terminal. The gravity sensor detects and records the gravitational three-axis acceleration data of the user during exercise; the light sensor detects and records the ambient brightness of the environment in which the mobile terminal is located while the user exercises; and the distance sensor detects and records the distance between the mobile terminal and surrounding objects while the user exercises.
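For illustration only, the following sketch shows how such sensors might be registered and buffered on an Android-based mobile terminal at roughly 30 Hz using the platform's SensorManager API; the Android platform, the class name ActionDataRecorder, and the buffering scheme are assumptions of this sketch, not features defined by this application.
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import java.util.ArrayList;
    import java.util.List;

    public class ActionDataRecorder implements SensorEventListener {
        private final SensorManager sensorManager;
        private final List<float[]> accelBuffer = new ArrayList<>(); // cached samples, read later by the processor

        public ActionDataRecorder(Context context) {
            sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        }

        public void start() {
            // 30 Hz corresponds to roughly one sample every 33,333 microseconds.
            Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            Sensor gravity = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY);
            Sensor light = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
            Sensor proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
            sensorManager.registerListener(this, accel, 33333);
            sensorManager.registerListener(this, gravity, 33333);
            sensorManager.registerListener(this, light, SensorManager.SENSOR_DELAY_NORMAL);
            sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                accelBuffer.add(event.values.clone()); // X, Y, Z acceleration in m/s^2, gravity included
            }
            // Gravity, brightness and distance samples would be buffered in the same way.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }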
It can be understood that, when the sensors of the mobile terminal include a three-axis acceleration sensor and a gravity sensor, there may be no linear three-axis acceleration sensor; similarly, when the sensors of the mobile terminal include a linear three-axis acceleration sensor, there may be no three-axis acceleration sensor or gravity sensor.
In other embodiments, the sensors may include not only the above sensors but also at least one of an angle sensor, a gyroscope, a height sensor, a position sensor, and the like. The angle sensor detects and records angle data of the mobile terminal; the gyroscope detects and records angle and orientation data of the mobile terminal; the height sensor detects height data; and the position sensor detects position data.
Optionally, in one embodiment, detecting the user's action data over the predetermined period may mean that the processor of the mobile terminal controls the sensor to detect the user's action data over the predetermined period at a predetermined sampling frequency. The predetermined sampling frequency can be any value between 10 Hz and 50 Hz, for example 10 Hz, 30 Hz, or 50 Hz; in this embodiment, the predetermined sampling frequency is 30 Hz.
Optionally, in other embodiments, detecting the user's action data over the predetermined period may include: the processor determines a sampling frequency according to the user's action data, and the processor controls the sensor to detect the user's action data over the predetermined period at that sampling frequency.
For example, the processor obtains the user's action data in a first time period from the data detected by the sensor in the first time period, analyzes the user's action type in the first time period from that data, and then obtains the sampling frequency for the predetermined period according to that action type, where the first time period can be a period before the predetermined period.
In this way, when the processor determines that the action type of the user in the first time period indicates vigorous movement, it controls the sensor to detect the user's action data in the predetermined period at a higher sampling frequency; when the processor determines that the action type of the user in the first time period indicates gentle movement, it controls the sensor to detect the user's action data at a lower sampling frequency.
In other embodiments, the processor can obtain the current time, obtain the corresponding sampling frequency from a pre-stored time-to-sampling-frequency mapping table according to the current time, and then control the sensor to detect the user's action data at that sampling frequency.
For example, the pre-stored time-to-sampling-frequency mapping table records the different sampling frequencies that the processor set according to the user's action data detected in different time periods in the past; according to this one-to-one relationship, the time-to-sampling-frequency mapping table is generated and stored, where the more vigorous the action characterized by the action data, the higher the sampling frequency that is set.
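A minimal sketch of such a pre-stored table, assuming hour-of-day keys and sampling frequencies in Hz; the specific hours and values are illustrative only and are not taken from this application.
    import java.util.HashMap;
    import java.util.Map;

    public class SamplingFrequencyTable {
        // Hour of day (0-23) mapped to a sampling frequency in Hz, set from past activity levels.
        private final Map<Integer, Integer> hourToHz = new HashMap<>();

        public SamplingFrequencyTable() {
            for (int hour = 0; hour < 24; hour++) {
                hourToHz.put(hour, 10);      // default: gentle activity, lower rate
            }
            for (int hour = 7; hour <= 9; hour++) {
                hourToHz.put(hour, 50);      // e.g. past data showed vigorous morning exercise
            }
            for (int hour = 18; hour <= 20; hour++) {
                hourToHz.put(hour, 30);      // e.g. past data showed an evening walk
            }
        }

        // Returns the sampling frequency to use for the given hour of day.
        public int frequencyForHour(int hourOfDay) {
            return hourToHz.getOrDefault(hourOfDay, 30);
        }
    }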
In this embodiment, the predetermined period can be chosen according to actual conditions; for example, the predetermined period can be any duration between 5 minutes and 5 hours, such as 5 minutes, 1 hour, or 5 hours. This application does not restrict this.
Step S13: Time-domain feature values and frequency-domain feature values are obtained from the action data.
The time-domain feature values include acceleration feature values and environmental feature values.
Optionally, before step S13, the method for the mobile terminal to generate a motion classification model may further include windowing the action data to divide it into multiple pieces of action data, where every piece contains the same amount of data and any two adjacent pieces share 20% to 80% of their data. In this embodiment, every piece contains the same amount of data and any two adjacent pieces share 50% of their data.
Dividing the action data into multiple pieces in this way not only splits the action data of a long period into the action data of several short periods, which makes the action data easier to analyze and compute with, but also, because any two adjacent pieces share 50% of their data, uses the recorded action data more efficiently, which makes the classification more accurate.
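For illustration, windowing a recorded stream of three-axis samples into equal-sized pieces that share 50% of their data might be sketched as follows; the window length itself is not specified by this application and is left as a parameter.
    import java.util.ArrayList;
    import java.util.List;

    public class Windowing {
        // Splits a sample stream into windows of windowSize samples, each new window
        // starting windowSize/2 samples after the previous one (50% shared data).
        public static List<float[][]> slidingWindows(List<float[]> samples, int windowSize) {
            List<float[][]> windows = new ArrayList<>();
            int step = windowSize / 2;
            for (int start = 0; start + windowSize <= samples.size(); start += step) {
                float[][] window = new float[windowSize][];
                for (int i = 0; i < windowSize; i++) {
                    window[i] = samples.get(start + i);
                }
                windows.add(window);
            }
            return windows;
        }
    }
At the 30 Hz sampling frequency of this embodiment, a windowSize of 90 samples would correspond to three-second pieces, but that choice is an assumption of this sketch.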
Optionally, after the action data is windowed and divided into multiple pieces, the method for the mobile terminal to generate a motion classification model may further include filtering every piece of action data.
Optionally, filtering every piece of action data may include filtering the three-axis acceleration data in every piece.
Optionally, filtering every piece of action data may also include filtering the ambient brightness data and/or the distance between the mobile terminal and surrounding objects in every piece.
Optionally, the filters used to filter the pieces of action data may be the same or different, and a high-pass filter, a low-pass filter, a band-pass filter, or a band-stop filter may be chosen; this application does not restrict this. In this embodiment, the filter used is a Gaussian low-pass filter, and every piece of action data is filtered by Gaussian low-pass filtering. Specifically, applying Gaussian low-pass filtering to the action data means convolving the action data with a Gaussian kernel, that is:
Iσ = I * Gσ   (1)
where Gσ is the one-dimensional Gaussian kernel function with standard deviation σ, whose formula is:
Gσ(x) = (1 / (√(2π)·σ)) · exp(−x² / (2σ²))   (2)
In this embodiment, Gaussian low-pass filtering is applied to every detected and recorded piece of action data using formula (1) and formula (2).
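A minimal sketch of formulas (1) and (2): a discrete one-dimensional Gaussian kernel of standard deviation σ is built and convolved with one axis of a piece of action data; the kernel radius of 3σ and the edge clamping are illustrative choices of this sketch.
    public class GaussianFilter {
        // Discrete one-dimensional Gaussian kernel Gσ (formula (2)), normalized to sum to 1.
        public static double[] gaussianKernel(double sigma) {
            int radius = (int) Math.ceil(3 * sigma);
            double[] kernel = new double[2 * radius + 1];
            double sum = 0;
            for (int i = -radius; i <= radius; i++) {
                kernel[i + radius] = Math.exp(-(i * i) / (2 * sigma * sigma)) / (Math.sqrt(2 * Math.PI) * sigma);
                sum += kernel[i + radius];
            }
            for (int i = 0; i < kernel.length; i++) {
                kernel[i] /= sum;   // normalization keeps the overall signal level unchanged
            }
            return kernel;
        }

        // Iσ = I * Gσ (formula (1)): convolves one axis of a piece of action data with the kernel.
        public static double[] lowPass(double[] signal, double sigma) {
            double[] kernel = gaussianKernel(sigma);
            int radius = kernel.length / 2;
            double[] filtered = new double[signal.length];
            for (int n = 0; n < signal.length; n++) {
                double acc = 0;
                for (int k = -radius; k <= radius; k++) {
                    int idx = Math.min(Math.max(n - k, 0), signal.length - 1);   // clamp at the window edges
                    acc += signal[idx] * kernel[k + radius];
                }
                filtered[n] = acc;
            }
            return filtered;
        }
    }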
In other embodiments, every piece of action data can be filtered adaptively according to the user's action data. For example, the filter type and/or the filter coefficients can be changed according to the user's action data in the first time period, so that adaptive filtering is performed according to the user's action data; this application does not restrict the adaptive filtering method.
Optionally, when the mobile terminal includes a gravity sensor, the gravitational three-axis acceleration data collected by the gravity sensor can also be filtered; the specific filtering method is as described above and is not repeated here. In this embodiment, the three-axis acceleration data detected by the three-axis acceleration sensor and the gravitational three-axis acceleration data detected by the gravity sensor are filtered in the same way, for example both using a Gaussian low-pass filter on every piece of action data.
Optionally, after every piece of action data has been filtered, the method for the mobile terminal to generate a motion classification model may further include correcting the detected and recorded three-axis acceleration data to remove the influence of gravitational acceleration and obtain linear three-axis acceleration data.
Specifically, when the acceleration sensor in the mobile terminal is a three-axis acceleration sensor, the three-axis acceleration data it detects and records is affected by the gravitational three-axis acceleration, so the detected and recorded three-axis acceleration data needs to be corrected to remove the influence of gravitational acceleration. A specific correction method is to obtain linear three-axis acceleration data, free of the influence of gravitational acceleration, from the filtered three-axis acceleration data and the filtered gravitational three-axis acceleration data through a difference equation. In this embodiment, the expression of the difference equation is:
y(n) = A·y(n−1) + (1−A)·x(n)   (3)
where y(n−1) represents the filtered three-axis acceleration data, x(n) represents the filtered gravitational three-axis acceleration data, y(n) represents the linear three-axis acceleration data free of the influence of gravity, and the value of A can be set according to actual conditions; this application does not restrict this.
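A minimal sketch of formula (3) applied sample by sample to one axis, reading y(n−1) as the previous filtered acceleration sample and x(n) as the current filtered gravity sample, exactly as defined above; the value A = 0.8 in the usage comment is an illustrative choice, since this application leaves A to be set according to actual conditions.
    public class GravityRemoval {
        // y(n) = A*y(n-1) + (1-A)*x(n): combines the filtered acceleration data (accel)
        // with the filtered gravity data (gravity) to obtain linear acceleration on one axis.
        public static double[] linearAcceleration(double[] accel, double[] gravity, double a) {
            double[] linear = new double[accel.length];
            for (int n = 0; n < accel.length; n++) {
                double yPrev = (n > 0) ? accel[n - 1] : accel[0];   // y(n-1): previous filtered acceleration sample
                linear[n] = a * yPrev + (1 - a) * gravity[n];       // x(n): current filtered gravity sample
            }
            return linear;
        }
        // Example: double[] linearX = linearAcceleration(filteredAccelX, filteredGravityX, 0.8);
    }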
Optionally, after the three-axis acceleration data has been corrected, the method for the mobile terminal to generate a motion classification model may further include computing the vector sum of each three-axis acceleration sample of every filtered piece of action data. Specifically, when the filtered and corrected linear three-axis acceleration components are ax, ay, and az, the vector sum of the three-axis acceleration data is √(ax² + ay² + az²). In this way, when the mobile terminal is not held fixed, for example when it is placed in a pocket, the influence on the classification result of the mobile terminal moving relative to the pocket can be weakened.
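For illustration, the vector sum of each corrected sample reduces to its Euclidean magnitude, which can be computed for a whole window as sketched below (not code from this application).
    public class VectorSum {
        // Returns sqrt(ax^2 + ay^2 + az^2) for every three-axis sample of a window of linear acceleration.
        public static double[] magnitudes(double[][] window) {
            double[] out = new double[window.length];
            for (int i = 0; i < window.length; i++) {
                double ax = window[i][0], ay = window[i][1], az = window[i][2];
                out[i] = Math.sqrt(ax * ax + ay * ay + az * az);   // orientation-independent acceleration value
            }
            return out;
        }
    }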
In this embodiment, the environmental feature values include the mean of the ambient brightness values over every piece of action data and/or the mean of the distance between the mobile terminal and surrounding objects over every piece of action data.
It can be understood that, in this embodiment, the mean of the ambient brightness values over every piece of action data can be the mean of the filtered environmental data of that piece; in other embodiments, it can be the mean of the unfiltered environmental data of that piece. Similarly, in this embodiment, the mean of the distance between the mobile terminal and surrounding objects over every piece of action data can be the mean of the filtered distance values of that piece; in other embodiments, it can also be the mean of the unfiltered distance values of that piece.
In other embodiments, the environmental feature values can also include the mean of the humidity data over every piece of action data and the mean of the temperature data over every piece of action data.
In another embodiment, the environmental feature values can also include the variance of the ambient brightness values over every piece of action data and/or the variance of the distance between the mobile terminal and surrounding objects over every piece of action data.
In another embodiment, the environmental feature values can also include the standard deviation of the ambient brightness values over every piece of action data and/or the standard deviation of the distance between the mobile terminal and surrounding objects over every piece of action data.
The acceleration feature values can be the acceleration data of every piece of action data after filtering, correction, and vector-sum computation.
Optionally, the frequency-domain feature values may include the first 2^N coefficients obtained by applying a fast Fourier transform to the three-axis acceleration data of every filtered piece of action data, where N is a natural number. Optionally, applying the fast Fourier transform to the three-axis acceleration data of every filtered piece may mean applying the fast Fourier transform separately to the acceleration data of each axis and obtaining the first 2^N coefficients of each axis after the transform, for example the first 4, the first 8, the first 16, or the first 32 coefficients; this application does not restrict this.
In this embodiment, the frequency-domain feature values include the first 16 coefficients obtained by applying the fast Fourier transform separately to each axis of the three-axis acceleration data of every filtered piece of action data.
The Java code for obtaining the coefficients of the fast Fourier transform is as follows:
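The listing itself is published as an image in the original document and does not survive in this text; as a stand-in, a minimal sketch that returns the first 16 Fourier magnitude coefficients of each axis of a window is given below. A direct discrete Fourier transform is used here for brevity in place of a full FFT routine, and all class and method names are illustrative.
    public class FrequencyFeatures {
        // Returns the magnitudes of the first k Fourier coefficients of one axis of a window.
        public static double[] fourierCoefficients(double[] axis, int k) {
            int n = axis.length;
            double[] magnitudes = new double[k];
            for (int f = 0; f < k; f++) {
                double re = 0, im = 0;
                for (int t = 0; t < n; t++) {
                    double angle = -2.0 * Math.PI * f * t / n;
                    re += axis[t] * Math.cos(angle);
                    im += axis[t] * Math.sin(angle);
                }
                magnitudes[f] = Math.sqrt(re * re + im * im);   // coefficient value used as a frequency-domain feature
            }
            return magnitudes;
        }

        // Frequency-domain feature vector of one window: the first 16 coefficients of each axis, concatenated.
        public static double[] features(double[] xAxis, double[] yAxis, double[] zAxis) {
            double[] fx = fourierCoefficients(xAxis, 16);
            double[] fy = fourierCoefficients(yAxis, 16);
            double[] fz = fourierCoefficients(zAxis, 16);
            double[] all = new double[48];
            System.arraycopy(fx, 0, all, 0, 16);
            System.arraycopy(fy, 0, all, 16, 16);
            System.arraycopy(fz, 0, all, 32, 16);
            return all;
        }
    }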
In this way, the coefficient values of the fast Fourier transform of each axis of the three-axis acceleration data of every piece of action data are obtained, and the first 16 values are taken as the frequency-domain feature values.
Step S14: A second classification model is obtained from the time-domain feature values, the frequency-domain feature values, and the first classification model.
Optionally, obtaining the second classification model from the time-domain feature values, the frequency-domain feature values, and the first classification model may include: obtaining data samples from the time-domain feature values and the frequency-domain feature values, and obtaining a classifier; and obtaining the second classification model from the data samples, the classifier, and the first classification model.
Optionally, the data samples include training samples and test samples.
Specifically, the data samples are divided into at least two parts; any one part of the at least two parts is used as the test sample, and the parts other than the test sample are used as the training samples.
In this embodiment, the data samples are divided into three parts, where any two parts serve as the training samples and the remaining part serves as the test sample.
Optionally, dividing the data samples into three parts can mean dividing the data samples into three equal parts; specifically, the time-domain feature values and frequency-domain feature values of the multiple pieces of action data are divided into three parts.
Optionally, there are many possible choices of classifier; for example, one or a combination of a Bayes classifier, a neural network classifier, a logistic classifier, and a support vector machine classifier can be chosen; this application does not restrict this.
In this embodiment, the classifier is a support vector machine classifier.
After the data samples are divided into training samples and test samples, the first classification model is updated using the support vector machine classifier and the training samples.
Because the first classification model in this application is the classification model of the five actions of being still, walking, running, cycling, and other actions, and the support vector machine classifier was proposed to solve two-class classification problems, the first classification model of this application is built by combining multiple classifiers, creating five support vector machine classifiers: a model that separates the still action from the combination of the other four actions; a model that separates the walking action from the combination of the other four actions; a model that separates the running action from the combination of the other four actions; a model that separates the cycling action from the combination of the other four actions; and a model that separates the other actions from the combination of the remaining four actions.
It can be understood that, in other embodiments, a different number of support vector machine classifiers can be created depending on the actions to be classified; the specific implementation is the same as the creation method of this application, and this application does not restrict this.
In this embodiment, updating the first classification model using the classifier and the training samples may include: obtaining the feature values of every piece of action data in the training samples and the five support vector machine classifiers in the classification model; obtaining, from the feature values of every piece of action data in the training samples and the five support vector machine classifiers in the classification model, the action type represented by every piece of action data in the training samples; and updating the first classification model according to the first classification model and the action types represented by the pieces of action data in the training samples, thereby obtaining the second classification model.
Specifically, one embodiment of obtaining the action type represented by every piece of action data in the training samples from its feature values and the five support vector machine classifiers in the classification model is as follows. The processor may first detect the user's running action; that is, the vectors corresponding to the running action form the positive set of the support vector machine classifier, and the vectors corresponding to the four actions other than running form the negative set. The methods by which the processor detects walking, cycling, being still, and other actions are similar and are not repeated here. It can be understood that the order in which the user's actions are detected can be changed. Optionally, during training, in order to prevent the imbalance between the amount of data of one action and the amount of data of the other four actions from affecting the training result, one quarter of the data of the above negative set can be chosen as the negative set. After the processor obtains the action types represented by the pieces of action data in the training samples, it can train the first classification model according to those action types, thereby updating the first classification model and obtaining the second classification model.
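To illustrate the one-versus-rest arrangement and the quarter-sized negative set described above, the sketch below builds the positive and negative sets for one of the five classifiers; the BinaryClassifier interface and the Sample class are hypothetical placeholders for whichever support vector machine implementation is actually used.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class OneVsRestTraining {
        // Hypothetical wrapper around a concrete support vector machine implementation.
        public interface BinaryClassifier {
            void train(List<double[]> positives, List<double[]> negatives);
            boolean predict(double[] features);
        }

        // One labelled piece of action data: its feature vector plus the action it represents.
        public static class Sample {
            public final double[] features;
            public final String action;   // "still", "walk", "run", "cycle" or "other"
            public Sample(double[] features, String action) {
                this.features = features;
                this.action = action;
            }
        }

        // Trains one of the five one-versus-rest classifiers, e.g. target = "run".
        public static void trainOneVsRest(BinaryClassifier svm, List<Sample> trainingSamples, String target) {
            List<double[]> positives = new ArrayList<>();
            List<double[]> negatives = new ArrayList<>();
            for (Sample s : trainingSamples) {
                if (s.action.equals(target)) {
                    positives.add(s.features);   // vectors of the target action form the positive set
                } else {
                    negatives.add(s.features);   // vectors of the other four actions form the negative set
                }
            }
            // Keep only a quarter of the negative set so that the imbalance between the
            // target action and the other four actions does not distort training.
            Collections.shuffle(negatives);
            List<double[]> reducedNegatives = negatives.subList(0, negatives.size() / 4);
            svm.train(positives, reducedNegatives);
        }
    }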
In this embodiment, in order to make full use of the detected and recorded user data, the action data in the predetermined period is divided into three parts, denoted L, M, and N. When updating the first classification model, the first classification model can be updated three times according to the first classification model and the action types represented by the pieces of action data in the training samples, and after each update the test sample is used to check whether the updated model meets the preset requirement.
For example, first, L and M of the action data are used as training samples to train the first classification model, and the test sample N is used to test the first classification model trained on L and M. When the test result meets the preset requirement, L and N of the action data are used as training samples to continue training the first classification model, and the test sample M is used to test the first classification model trained on L and N. When that test result meets the preset requirement, M and N of the action data are used as training samples to train the first classification model again, obtaining the updated first classification model, and the test sample L is used to test the first classification model trained on M and N. When that test result meets the preset requirement, the updated first classification model is used as the second classification model.
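The three rounds of training and testing over L, M and N described above can be sketched as follows; the Model interface and its train and test methods stand in for this application's model update and preset-requirement check and are assumptions of this sketch.
    import java.util.List;

    public class ThreeFoldUpdate {
        // Hypothetical handle on the classification model being updated.
        public interface Model {
            void train(List<double[]> trainA, List<double[]> trainB);   // update with two training parts
            boolean test(List<double[]> testPart);                      // true if the preset requirement is met
        }

        // Trains on two of the parts L, M, N and tests on the remaining one, three times;
        // returns false as soon as a test fails to meet the preset requirement.
        public static boolean rotateAndUpdate(Model model, List<double[]> l, List<double[]> m, List<double[]> n) {
            if (!trainAndCheck(model, l, m, n)) return false;   // train on L and M, test on N
            if (!trainAndCheck(model, l, n, m)) return false;   // train on L and N, test on M
            return trainAndCheck(model, m, n, l);               // train on M and N, test on L
        }

        private static boolean trainAndCheck(Model model, List<double[]> a, List<double[]> b, List<double[]> test) {
            model.train(a, b);
            return model.test(test);
        }
    }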
Optionally, in one embodiment, the test result meeting the preset requirement can mean classifying the first test sample with the updated first classification model; if the classification result of the first training sample by the first classification model before the update and the classification result of the first test sample by the first classification model after the update are within a predetermined error, the test result can be considered to meet the requirement. In another embodiment, the test result meeting the preset requirement can mean that the feature values in the updated first classification model and the feature values in the test sample satisfy a predetermined functional relationship, in which case the test result can be considered to meet the requirement; this application does not restrict this.
Optionally, in the above process, if the test result of testing the first classification model with any test sample, or of testing the first classification model after training on the training samples, does not meet the preset value, the data samples can be divided again into a second training sample and a second test sample, and the first classification model is updated using the second training sample until the test result meets the requirement, where the data contained in the second training sample differs from the data contained in the aforementioned training sample, and the data contained in the second test sample differs from the data contained in the aforementioned test sample.
In other embodiments, the first action data can also be divided into another number of parts, and the training samples and test samples can also be divided according to actual conditions; for example, the first action data can be divided into four parts, with any three parts used as training samples and the remaining part as the test sample, or with any two parts used as training samples and the remaining two parts as test samples. This application does not restrict this.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a motion classification method according to an embodiment of the present application.
In this embodiment, the motion classification method of the mobile terminal may include the following steps.
Step S21: The mobile terminal obtains a first classification model.
The specific execution of obtaining the first classification model by the mobile terminal is similar to the method of step S11 above and is not repeated here.
Step S22: The mobile terminal detects and records action data of the user, the action data including first action data in a first time period and second action data in a second time period.
The specific execution of detecting and recording the user's action data is similar to the method of step S12 above and is not repeated here.
Step S23: A second classification model is obtained from the first action data and the first classification model.
For the specific method of obtaining the second classification model, refer to steps S13 and S14 above; it is not repeated here.
Step S24: The second action data is matched against the second classification model to classify the second action data.
Specifically, matching the second action data against the second classification model to classify the second action data may include matching the second action data against the time-domain feature values and frequency-domain feature values in the second classification model, so as to classify the user's actions into the five actions of being still, walking, running, cycling, and other actions.
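A sketch of how a window of second action data might be matched against the five one-versus-rest classifiers of the second classification model; choosing the action whose classifier reports the highest score is one simple decision rule assumed by this sketch, not a rule stated in this application.
    import java.util.Map;

    public class ActionMatcher {
        // Hypothetical one-versus-rest decision function: a higher score means a better match.
        public interface ScoringClassifier {
            double score(double[] features);
        }

        // Classifies one window of second action data using the five classifiers of the second model,
        // keyed by action name ("still", "walk", "run", "cycle", "other").
        public static String classify(Map<String, ScoringClassifier> secondModel, double[] windowFeatures) {
            String best = "other";
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Map.Entry<String, ScoringClassifier> entry : secondModel.entrySet()) {
                double s = entry.getValue().score(windowFeatures);
                if (s > bestScore) {
                    bestScore = s;
                    best = entry.getKey();
                }
            }
            return best;
        }
    }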
Optionally, after step S24, as the detected and recorded exercise data of the user increases, a third classification model can also be obtained from the second action data and the second classification model, and third action data can be matched against the third classification model to classify the third action data, and so on.
In this way, as the detected and recorded action data of the user increases, the classification model is continuously updated according to the user's action data, so that the classification model better fits the user's exercise habits, which improves the accuracy of the classification method.
Referring to Fig. 3, Fig. 3 is a schematic diagram of the hardware structure of a mobile terminal according to an embodiment of the present application. In this embodiment, the mobile terminal 20 includes a processor 21, a bus 22, and a memory 23 and a sensor 24 connected to the processor 21 through the bus 22. The memory 23 stores a computer program, and the processor 21 calls the computer program to execute the method for generating a motion classification model and the motion classification method of any of the above embodiments.
The sensor 24 may include an environmental sensor, an acceleration sensor, a gravity sensor, and the like, where the environmental sensor further includes a light sensor, a distance sensor, and the like, and the acceleration sensor can be a three-axis acceleration sensor or a linear three-axis acceleration sensor; the sensor 24 may also include at least one of a gyroscope, a height sensor, and a position sensor. For the description of the specific functions of the sensor 24, refer to the description in any of the above embodiments; it is not repeated here.
Referring to Fig. 4, Fig. 4 is a schematic diagram of a storage device according to an embodiment of the present application. In this embodiment, the storage device 30 stores a computer program, and the computer program can be executed to implement the method for a mobile terminal to generate a motion classification model of any of the above embodiments.
Optionally, the storage device 30 can be any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or a server.
Optionally, the storage device can also be the memory 23 in the above embodiment.
Unlike the prior art, in this application the mobile terminal obtains a first classification model; the mobile terminal detects and records action data of the user over a predetermined period; time-domain feature values and frequency-domain feature values are obtained from the action data; and a second classification model is obtained from the time-domain feature values, the frequency-domain feature values, and the first classification model, where the time-domain feature values include acceleration feature values and environmental feature values. Because the method for generating a motion classification model derives time-domain and frequency-domain feature values from the action data and uses them together with the first classification model to obtain the second classification model, the terminal can combine the time-domain and frequency-domain data of the user's movement when classifying the user's actions, which makes the generated motion classification model more accurate.
The above are only embodiments of this application and do not limit the patent scope of this application. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of this application, applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of this application.

Claims (10)

CN201711337023.1A — priority/filing date 2017-12-12 — The method of mobile terminal and its generation classification of motion model, storage device — Pending — CN108564100A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201711337023.1A (CN108564100A) | 2017-12-12 | 2017-12-12 | The method of mobile terminal and its generation classification of motion model, storage device


Publications (1)

Publication Number | Publication Date
CN108564100A | 2018-09-21

Family

ID=63530276

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201711337023.1A (Pending, CN108564100A) | The method of mobile terminal and its generation classification of motion model, storage device | 2017-12-12 | 2017-12-12

Country Status (1)

Country | Link
CN | CN108564100A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110205359A1 (en)* | 2010-02-19 | 2011-08-25 | Panasonic Corporation | Video surveillance system
CN103076619A (en)* | 2012-12-27 | 2013-05-01 | Shandong University | System and method for performing indoor and outdoor 3D (Three-Dimensional) seamless positioning and gesture measuring on fire man
CN105528613A (en)* | 2015-11-30 | 2016-04-27 | Nanjing University of Posts and Telecommunications | Behavior identification method based on GPS speed and acceleration data of smart phone
CN105956558A (en)* | 2016-04-26 | 2016-09-21 | Tao Dapeng | Human movement identification method based on three-axis acceleration sensor
CN106210269A (en)* | 2016-06-22 | 2016-12-07 | Nanjing University of Aeronautics and Astronautics | A kind of human action identification system and method based on smart mobile phone
CN106237604A (en)* | 2016-08-31 | 2016-12-21 | Goertek Inc. | Wearable device and the method utilizing its monitoring kinestate
CN107103297A (en)* | 2017-04-20 | 2017-08-29 | Wuhan University of Technology | Gait identification method and system based on mobile phone acceleration sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xu Chuanlong, "Human Behavior Recognition Based on a Three-Axis Acceleration Sensor" (基于三维加速度传感器的人体行为识别), China Master's Theses Full-text Database, Information Science and Technology Series*

Similar Documents

Publication | Title
KR101690649B1 (en)Activity classification in a multi-axis activity monitor device
Otebolaku et al.User context recognition using smartphone sensors and classification models
JP7544379B2 (en) Information processing device, information processing method, trained model generation method and program
CN108703760A (en)Human motion gesture recognition system and method based on nine axle sensors
US10588517B2 (en)Method for generating a personalized classifier for human motion activities of a mobile or wearable device user with unsupervised learning
RU2606880C2 (en)Method, device and software for activity sensor data processing
CN106999748B (en)Systems, devices and methods relating to athletic data
EP2692156A1 (en)Methods, devices, and apparatuses for activity classification using temporal scaling of time-referenced features
CN108629170A (en)Personal identification method and corresponding device, mobile terminal
Jantawong et al.Enhancement of human complex activity recognition using wearable sensors data with inceptiontime network
CN107277222A (en)User behavior state judging method based on mobile phone built-in sensors
Ranakoti et al.Human fall detection system over IMU sensors using triaxial accelerometer
Lu et al.Mobile online activity recognition system based on smartphone sensors
Minh et al.Evaluation of smartphone and smartwatch accelerometer data in activity classification
CN108827290A (en)A kind of human motion state inverting device and method
CN108564100A (en)The method of mobile terminal and its generation classification of motion model, storage device
WO2014191803A1 (en)Acceleration-based step activity detection and classification on mobile devices
Othmen et al.A novel on-wrist fall detection system using Supervised Dictionary Learning technique
Bonomi et al.Non-intrusive and privacy preserving activity recognition system for infants exploiting smart toys
Hashim et al.Machine learning-based human activity recognition using neighbourhood component analysis
Alman et al.Pattern recognition of human activity based on smartphone data sensors using SVM multiclass
Han et al.An Autoencoder Framework for Few-Shot Human Activity Recognition with Sensor Data
Skoglund et al.Activity tracking using ear-level accelerometers
Sheishaa et al.A context-aware motion mode recognition system using embedded inertial sensors in portable smart devices
Hossen et al.Smartphone-Based Drivers Context Recognition

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-09-21
