US20030208289A1 - Method of recognition of human motion, vector sequences and speech - Google Patents

Method of recognition of human motion, vector sequences and speech

Info

Publication number
US20030208289A1
Authority
US
United States
Prior art keywords
model
vectors
sequence
input
vector
Prior art date
2002-05-06
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/427,882
Other versions
US7366645B2 (en)
Inventor
Jezekiel Ben-Arie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2002-05-06
Filing date
2003-05-01
Publication date
2003-11-06
Application filed by Individual
Priority to US10/427,882
Publication of US20030208289A1
Application granted
Publication of US7366645B2
Adjusted expiration
Status: Expired - Fee Related

Abstract

A method for recognition of an input human motion as the most similar to one model human motion out of a collection of stored model human motions. In the preferred method, both the input and the model human motions are represented by vector sequences derived from samples of angular poses of body parts. The input and model motions are sampled at substantially different rates. A special optimization algorithm that employs sequencing constraints and dynamic programming is used to find the optimal input-model matching scores. When only partial body-pose information is available, candidate matching vector pairs for the optimization are found by indexing into a set of hash tables, where each table pertains to a subset of body parts. The invention also includes methods for recognition of vector sequences and for speech recognition.
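
To make the representation concrete, the sketch below builds a vector sequence from recorded joint-angle trajectories at a chosen sampling rate. It is illustrative Python only: the array shapes, the uniform sampling step, and the finite-difference derivative are assumptions, since the patent requires only samples of angular poses and (in claims 3-4) their temporal derivatives.

```python
import numpy as np

def to_vector_sequence(angle_traj, step):
    """angle_traj: (T, J) array of angular poses of J body parts over T frames.
    Returns one vector per retained sample: the pose sample concatenated
    with a finite-difference temporal derivative (cf. claims 3-4)."""
    samples = angle_traj[::step]
    deriv = np.gradient(samples, axis=0)   # temporal derivative per body part
    return np.hstack([samples, deriv])     # one vector per sample

# Model motions are sampled densely and the input sparsely (or vice versa),
# giving the "substantially different" sampling rates of the abstract.
model_seq = to_vector_sequence(np.random.rand(200, 10), step=2)   # fast rate
input_seq = to_vector_sequence(np.random.rand(200, 10), step=10)  # sparse rate
```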


Claims (45)

What is claimed is:
1. A method for recognizing an input human motion as most similar to a model human motion, which is a member of a stored collection of model human motions, comprising the steps of:
a. creating a collection of model human motions by measuring and recording the model trajectories of body parts that pertain to human performances of such collection of model human motions;
b. sampling, at a model-rate of sampling, one set of recorded model trajectories of body parts;
c. repeating step b, each time with another set of recorded model trajectories of body parts, until all the recorded model trajectories of body parts included in the collection of model human motions have been sampled;
d. representing each sample of the recorded model trajectories of body parts by a model vector m_rj; wherein each component of such model vector is derived from a sample of a recorded model trajectory of one body part;
e. representing each member of the collection of model human motions by a model vectors sequence M_j = (m_1j ... m_rj ... m_qj); wherein the subscript q denotes the total number of model vectors in the model vectors sequence M_j; wherein the subscript r denotes the location of the model vector m_rj within the model vectors sequence M_j; wherein the subscript j denotes the serial number of the model vectors sequence M_j within the collection of model vectors sequences {M_j} that represents the corresponding collection of model human motions;
f. storing in a hash table the entire collection of model vectors sequences {M_j}, by storing each of the model vectors m_rj that belongs to the collection of model vectors sequences {M_j} in a hash table bin whose address is the nearest to the model vector m_rj;
g. acquiring an input human motion by measuring and recording the input trajectories of body parts that pertain to a human performance of such input human motion;
h. sampling, at an input-rate of sampling, the recorded input trajectories of body parts; wherein the input-rate of sampling is set to be substantially different from the model-rate of sampling;
i. representing each sample of the recorded input trajectories of body parts by an input vector t_nk; wherein each component of such an input vector is derived from a sample of a recorded input trajectory of the same body part that pertains to the corresponding component of the model vector m_rj;
j. representing the sampled input human motion by an input vectors sequence T_k = (t_1k ... t_nk ... t_pk); wherein the subscript p denotes the total number of input vectors in the input vectors sequence; wherein the subscript n denotes the location of the input vector t_nk within the input vectors sequence T_k; wherein the subscript k denotes the serial number of the input vectors sequence;
k. employing an optimal matching algorithm to find the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j};
l. repeating step k until all the optimal matching scores that pertain to all the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, are found;
m. comparing all the optimal matching scores that pertain to all the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, and finding the one model vectors sequence M_o that has the highest optimal matching score; recognizing the input human motion as most similar to the model human motion that pertains to the model vectors sequence M_o with the highest optimal matching score.
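
Claim 1, step f only requires that each model vector be stored in the hash-table bin whose address is nearest to it; claim 31, step l later names vector quantization as one way to place the bin addresses. The sketch below is a minimal stand-in that uses a uniform grid instead of a learned quantizer; the class name, cell width, and (j, r) keying are assumptions, not the patent's prescription.

```python
import numpy as np
from collections import defaultdict

class VectorHashTable:
    def __init__(self, cell=0.25):
        self.cell = cell                   # quantization cell width (assumed)
        self.bins = defaultdict(list)      # bin address -> [(j, r, vector)]

    def _address(self, v):
        # Nearest bin address for vector v under the uniform-grid assumption.
        return tuple(np.round(np.asarray(v) / self.cell).astype(int))

    def store(self, j, r, v):
        # Model vector m_rj goes into the bin whose address is nearest to it.
        self.bins[self._address(v)].append((j, r, np.asarray(v)))

    def retrieve(self, t):
        # All model vectors in the bin addressed by input vector t (step a of
        # claim 2); a practical version would also probe neighboring bins.
        return self.bins.get(self._address(t), [])
```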
2. The method of claim 1, wherein employing the optimal matching algorithm for finding the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j}, comprises the steps of:
a. indexing into a hash table each input vector t_nk that belongs to the input vectors sequence T_k, and retrieving all the model vectors m_rj that are near to each input vector t_nk; using the model vectors retrieved to construct all the vector pairs (t_nk, m_rj) composed of vectors that are near to one another;
b. computing for each vector pair (t_nk, m_rj) retrieved in step a, a pair matching score that reflects the vector pair's matching quality;
c. removing from all the vector pairs retrieved, all the vector pairs that have a pair matching score equal to or below a predetermined threshold; retaining the remaining vector pairs and naming them matching vector pairs;
d. employing an optimization algorithm to find the valid pair sequence with the highest sequence matching score; wherein a sequence matching score of a valid pair sequence is defined as the sum of all the pair matching scores that pertain to the matching vector pairs that are components of the valid pair sequence; wherein the valid pair sequence is constructed only from sequences of matching vector pairs that have a mutual sequential relation; wherein the mutual sequential relation between any two matching vector pairs ... (t_nk, m_rj) ... (t_ck, m_ej) ... included in the valid pair sequence has to fulfill the following three conditions: (I) c is not equal to n; (II) if c > n, then r ≤ e; (III) if c < n, then e ≤ r;
e. naming the valid pair sequence with the highest sequence matching score the optimal pair sequence, with the optimal matching score.
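
Step d of claim 2 admits a compact dynamic-programming reading: among the thresholded matching pairs, find the maximum-weight chain in which input indices are distinct and model indices do not decrease as input indices increase (conditions I-III). A sketch under assumed encodings follows; pairs are (n, r, score) tuples, and the O(P^2) recurrence is one possible realization of the "special optimization algorithm" of claims 6 and 8, not the patent's prescribed one.

```python
def best_pair_sequence(pairs):
    """pairs: iterable of (n, r, score) with input index n and model index r.
    Returns the optimal matching score: the best sum of pair scores over a
    valid pair sequence satisfying conditions (I)-(III) of claim 2, step d."""
    pairs = sorted(pairs)                  # by input index, then model index
    best = [0.0] * len(pairs)              # best[i]: top score ending at pair i
    for i, (n, r, s) in enumerate(pairs):
        prev = max((best[j] for j, (c, e, _) in enumerate(pairs[:i])
                    if c < n and e <= r), default=0.0)
        best[i] = prev + s
    return max(best, default=0.0)

# Three input samples against a five-vector model:
print(best_pair_sequence([(0, 1, 0.9), (1, 0, 0.8), (1, 3, 0.7), (2, 4, 0.6)]))
# -> 2.2, from the chain (0, 1) -> (1, 3) -> (2, 4)
```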
3. The method of claim 2, wherein the recorded trajectories of human body parts are equivalent to recorded poses of human body parts; wherein both the collection of model vectors sequences {M_j} and the input vectors sequence T_k are derived from samples of recorded poses of human body parts and from temporal derivatives of such samples.
4. The method of claim 3, wherein the samples of recorded poses of human body parts and their temporal derivatives correspond to samples of recorded angular poses of body parts and to temporal derivatives of samples of recorded angular poses of human body parts.
5. The method of claim 4, wherein the input vectors sequence T_k is acquired at a sparse, slow input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a fast model-rate of sampling.
6. The method of claim 5, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
7. The method of claim 4, wherein the input vectors sequence T_k is acquired at a fast input-rate of sampling, while each model vectors sequence M_j, which is a member of the collection of model vectors sequences {M_j}, is acquired at a slow, sparse model-rate of sampling.
8. The method of claim 7, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
9. The method of claim 1, wherein employing the optimal matching algorithm for finding the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j}, comprises the steps of:
a. indexing into a hash table each input vector t_nk that belongs to the input vectors sequence T_k, and retrieving all the model vectors m_rj that are near to each input vector t_nk; using the model vectors retrieved to construct all the vector pairs (t_nk, m_rj) composed of vectors that are near to one another;
b. computing for each vector pair (t_nk, m_rj) retrieved in step a, a pair matching score that reflects the vector pair's matching quality;
c. removing from all the vector pairs retrieved, all the vector pairs that have a pair matching score equal to or below a predetermined threshold; retaining the remaining vector pairs and naming them matching vector pairs;
d. employing an optimization algorithm to find the valid pair sequence with the highest sequence matching score; wherein a sequence matching score of a valid pair sequence is defined as the sum of all the pair matching scores that pertain to the matching vector pairs that are components of the valid pair sequence; wherein the valid pair sequence is constructed only from sequences of matching vector pairs that have a mutual sequential relation; wherein the mutual sequential relation between any two matching vector pairs ... (t_nk, m_rj) ... (t_ck, m_ej) ... included in the valid pair sequence has to fulfill the following three conditions: (I) r is not equal to e; (II) if e > r, then n ≤ c; (III) if e < r, then c ≤ n;
e. naming the valid pair sequence with the highest sequence matching score the optimal pair sequence, with the optimal matching score.
10. The method of claim 9, wherein the recorded trajectories of human body parts are equivalent to recorded poses of human body parts; wherein both the collection of model vectors sequences {M_j} and the input vectors sequence T_k are derived from samples of recorded poses of human body parts and from temporal derivatives of such samples.
11. The method of claim 10, wherein the samples of recorded poses of human body parts and their temporal derivatives correspond to samples of recorded angular poses of body parts and to temporal derivatives of samples of recorded angular poses of human body parts.
12. The method of claim 11, wherein the input vectors sequence T_k is acquired at a sparse, slow input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a fast model-rate of sampling.
13. The method of claim 12, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
14. The method of claim 11, wherein the input vectors sequence T_k is acquired at a fast input-rate of sampling, while each model vectors sequence M_j, which is a member of the collection of model vectors sequences {M_j}, is acquired at a slow, sparse model-rate of sampling.
15. The method of claim 14, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
16. A method for recognizing an input vectors sequence T_k as most similar to a model vectors sequence M_j, which is a member of a stored collection of model vectors sequences {M_j}, comprising the steps of:
a. creating a collection of model signal sets by measuring and recording each model signal set in the collection;
b. sampling, at a model-rate of sampling, one model signal set;
c. repeating step b, each time with another model signal set, until all the recorded model signal sets included in the collection of model signal sets have been sampled;
d. representing each sample of the model signal set by a model sample vector w_rj; wherein each component of such model sample vector is derived from one sample of one recorded model signal that belongs to the model signal set;
e. transforming each model sample vector w_rj into a model vector m_rj using a pre-specified transformation;
f. representing each member of the collection of model vectors sequences by a model vectors sequence M_j = (m_1j ... m_rj ... m_qj); wherein the subscript q denotes the total number of model vectors in the model vectors sequence M_j; wherein the subscript r denotes the location of the model vector m_rj within the model vectors sequence M_j; wherein the subscript j denotes the serial number of the model vectors sequence M_j within the collection of model vectors sequences {M_j};
g. storing in a hash table the entire collection of model vectors sequences {M_j}, by storing each of the model vectors m_rj that belongs to the collection of model vectors sequences {M_j} in a hash table bin whose address is the nearest to the model vector m_rj;
h. acquiring an input signal set by measuring and recording the input signal set; wherein each input signal of the input signal set corresponds to a model signal of the model signal set;
i. sampling, at an input-rate of sampling, the recorded input signal set; wherein the input-rate of sampling is adjusted to be substantially different from the model-rate of sampling;
j. representing each sample of the recorded input signal set by an input sample vector v_nk; wherein each component of such an input sample vector is derived from one sample of a recorded input signal that belongs to the input signal set;
k. transforming each input sample vector v_nk into an input vector t_nk using a pre-specified transformation;
l. representing the sampled input signal set by an input vectors sequence T_k = (t_1k ... t_nk ... t_pk); wherein the subscript p denotes the total number of input vectors in the input vectors sequence; wherein the subscript n denotes the location of the input vector t_nk within the input vectors sequence T_k; wherein the subscript k denotes the serial number of the input vectors sequence;
m. employing an optimal matching algorithm to find the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j};
n. repeating step m until all the optimal matching scores that pertain to all the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, are found;
o. comparing all the optimal matching scores that pertain to all the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, and finding the one model vectors sequence M_o that has the highest optimal matching score; recognizing the input signal set as most similar to the model signal set that pertains to the model vectors sequence M_o with the highest optimal matching score.
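
Claim 16 leaves the pre-specified transformation of steps e and k open. For the speech variants (claims 20, 23, 27, 30), which build vectors from sampled speech signals, discrete transforms of them, and their temporal derivatives, one plausible front end is a DCT over short windows, sketched below. The windowing, coefficient count, and SciPy DCT are assumptions, not the patent's prescription.

```python
import numpy as np
from scipy.fft import dct

def to_feature_sequence(frames, n_coef=13):
    """frames: (N, L) array, one length-L window of signal samples per row.
    Returns one feature vector per frame: leading DCT coefficients plus
    frame-to-frame derivatives (an assumed feature choice)."""
    coef = dct(frames, axis=1, norm='ortho')[:, :n_coef]
    deriv = np.gradient(coef, axis=0)   # temporal derivatives of the coefficients
    return np.hstack([coef, deriv])
```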
17. The method of claim 16, wherein employing the optimal matching algorithm for finding the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j}, comprises the steps of:
a. indexing into a hash table each input vector t_nk that belongs to the input vectors sequence T_k, and retrieving all the model vectors m_rj that are near to each input vector t_nk; using the model vectors retrieved to construct all the vector pairs (t_nk, m_rj) composed of vectors that are near to one another;
b. computing for each vector pair (t_nk, m_rj) retrieved in step a, a pair matching score that reflects the vector pair's matching quality;
c. removing from all the vector pairs retrieved, all the vector pairs that have a pair matching score equal to or below a predetermined threshold; retaining the remaining vector pairs and naming them matching vector pairs;
d. employing an optimization algorithm to find the valid pair sequence with the highest sequence matching score; wherein a sequence matching score of a valid pair sequence is defined as the sum of all the pair matching scores that pertain to the matching vector pairs that are components of the valid pair sequence; wherein the valid pair sequence is constructed only from sequences of matching vector pairs that have a mutual sequential relation; wherein the mutual sequential relation between any two matching vector pairs ... (t_nk, m_rj) ... (t_ck, m_ej) ... included in the valid pair sequence has to fulfill the following three conditions:
(I) c is not equal to n; (II) if c > n, then r ≤ e; (III) if c < n, then e ≤ r;
e. naming the valid pair sequence with the highest sequence matching score the optimal pair sequence, with the optimal matching score.
18. The method of claim 17, wherein the input vectors sequence T_k is acquired at a sparse, slow input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a fast model-rate of sampling.
19. The method of claim 18, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
20. The method of claim 19, wherein both the input vectors sequence T_k and the collection of model vectors sequences {M_j} represent speech utterances; wherein both are derived from values of sampled speech signals, discrete transforms of sampled speech signals, and their temporal derivatives.
21. The method of claim 17, wherein the input vectors sequence T_k is acquired at a fast input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a slow, sparse model-rate of sampling.
22. The method of claim 21, wherein the highest sequence matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
23. The method of claim 22, wherein both the input vectors sequence T_k and the collection of model vectors sequences {M_j} represent speech utterances; wherein both are derived from values of sampled speech signals, discrete transforms of sampled speech signals, and their temporal derivatives.
24. The method of claim 16, wherein employing the optimal matching algorithm for finding the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j}, comprises the steps of:
a. indexing into a hash table each input vector t_nk that belongs to the input vectors sequence T_k, and retrieving all the model vectors m_rj that are near to each input vector t_nk; using the model vectors retrieved to construct all the vector pairs (t_nk, m_rj) composed of vectors that are near to one another;
b. computing for each vector pair (t_nk, m_rj) retrieved in step a, a pair matching score that reflects the vector pair's matching quality;
c. removing from all the vector pairs retrieved, all the vector pairs that have a pair matching score equal to or below a predetermined threshold; retaining the remaining vector pairs and naming them matching vector pairs;
d. employing an optimization algorithm to find the valid pair sequence with the highest sequence matching score; wherein a sequence matching score of a valid pair sequence is defined as the sum of all the pair matching scores that pertain to the matching vector pairs that are components of the valid pair sequence; wherein the valid pair sequence is constructed only from sequences of matching vector pairs that have a mutual sequential relation; wherein the mutual sequential relation between any two matching vector pairs ... (t_nk, m_rj) ... (t_ck, m_ej) ... included in the valid pair sequence has to fulfill the following three conditions:
(I) r is not equal to e; (II) if e > r, then n ≤ c; (III) if e < r, then c ≤ n;
e. naming the valid pair sequence with the highest sequence matching score the optimal pair sequence, with the optimal matching score.
25. The method of claim 24, wherein the input vectors sequence T_k is acquired at a sparse, slow input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a fast model-rate of sampling.
26. The method of claim 25, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
27. The method of claim 26, wherein both the input vectors sequence T_k and the collection of model vectors sequences {M_j} represent speech utterances; wherein both are derived from values of sampled speech signals, discrete transforms of sampled speech signals, and their temporal derivatives.
28. The method of claim 24, wherein the input vectors sequence T_k is acquired at a fast input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a slow, sparse model-rate of sampling.
29. The method of claim 28, wherein the highest sequence matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
30. The method of claim 29, wherein both the input vectors sequence T_k and the collection of model vectors sequences {M_j} represent speech utterances; wherein both are derived from values of sampled speech signals, discrete transforms of sampled speech signals, and their temporal derivatives.
31. A method for recognizing an input human motion as most similar to a model human motion, which is a member of a stored collection of model human motions, comprising the steps of:
a. creating a collection of model human motions by measuring and recording the model trajectories of body parts that pertain to human performances of such collection of model human motions;
b. sampling, at a model-rate of sampling, one set of recorded model trajectories of body parts;
c. repeating step b, each time with another set of recorded model trajectories of body parts, until all the recorded model trajectories of body parts included in the collection of model human motions have been sampled;
d. representing each sample of the recorded model trajectories of body parts by a model vector m_rj; wherein each component of such model vector is derived from a sample of a recorded model trajectory of one body part;
e. representing each member of the collection of model human motions by a model vectors sequence M_j = (m_1j ... m_rj ... m_qj); wherein the subscript q denotes the total number of model vectors in the model vectors sequence M_j; wherein the subscript r denotes the location of the model vector m_rj within the model vectors sequence M_j; wherein the subscript j denotes the serial number of the model vectors sequence M_j within the collection of model vectors sequences {M_j} that represents the corresponding collection of model human motions;
f. acquiring an input human motion by measuring and recording the input trajectories of body parts that pertain to a human performance of such input human motion;
g. sampling, at an input-rate of sampling, the recorded input trajectories of body parts; wherein the input-rate of sampling is set to be substantially different from the model-rate of sampling;
h. representing each sample of the recorded input trajectories of body parts by an input vector t_nk; wherein each component of such an input vector is derived from a sample of a recorded input trajectory of the same body part that pertains to the corresponding component of the model vector m_rj;
i. representing the sampled input human motion by an input vectors sequence T_k = (t_1k ... t_nk ... t_pk); wherein the subscript p denotes the total number of input vectors in the input vectors sequence; wherein the subscript n denotes the location of the input vector t_nk within the input vectors sequence T_k; wherein the subscript k denotes the serial number of the input vectors sequence;
j. dividing the full set of human body parts into several groups of human body parts;
k. in complete correspondence to the division into groups of human body parts, also dividing each of the input vectors and each of the model vectors into input sub-vectors and corresponding model sub-vectors; wherein each sub-vector pertains to another group of human body parts;
l. storing each division of model sub-vectors, which corresponds to the same group of body parts, in a separate hash sub-table; wherein the hash sub-table has addressing sub-vectors that correspond in their dimensions to the dimensions of the model sub-vectors stored; wherein each model sub-vector is stored in a hash sub-table bin whose address sub-vector is the nearest to the model sub-vector; wherein the bins have address sub-vectors that are determined by a process of vector quantization;
m. indexing each of the input vectors t_nk within the input vectors sequence T_k by using each of its input sub-vectors to index separately into the corresponding hash sub-table;
n. for each input vector t_nk within the input vectors sequence T_k, retrieving the corresponding model sub-vectors extracted by separately indexing with input sub-vectors that are parts of the same input vector t_nk, and merging them back into model vectors m_rj; creating from each retrieved and merged model vector m_rj a vector pair (t_nk, m_rj);
o. computing for each vector pair (t_nk, m_rj) created in step n, a pair matching score that reflects the vector pair's matching quality;
p. removing from all the vector pairs created in step n, all the vector pairs that have a pair matching score equal to or below a predetermined threshold; retaining the remaining vector pairs and naming them matching vector pairs;
q. employing an optimal matching algorithm to find the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j};
r. repeating step q until all the optimal matching scores that pertain to all the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, are found;
s. comparing all the optimal matching scores that pertain to all the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, and finding the one model vectors sequence M_n that has the highest optimal matching score; recognizing the input human motion as most similar to the model human motion that pertains to the model vectors sequence M_n with the highest optimal matching score.
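
Steps j through n of claim 31 can be read as: split every vector into per-group sub-vectors, store each group's model sub-vectors in its own hash sub-table, index every sub-table with the matching input sub-vector, and merge the retrievals back into whole-vector candidates. The sketch below assumes three fixed slices as the body-part groups and again substitutes uniform grid binning for the vector quantization named in step l; keys identify a model vector's (j, r) position so per-group hits can be merged.

```python
import numpy as np
from collections import defaultdict

GROUPS = [slice(0, 4), slice(4, 8), slice(8, 12)]   # assumed body-part groups

def build_subtables(model_vectors, cell=0.25):
    """model_vectors: dict mapping a (j, r) key to a 12-dimensional vector."""
    tables = [defaultdict(set) for _ in GROUPS]
    for key, m in model_vectors.items():
        for table, g in zip(tables, GROUPS):
            addr = tuple(np.round(np.asarray(m)[g] / cell).astype(int))
            table[addr].add(key)                     # record the sub-vector's owner
    return tables

def retrieve_candidates(t, tables, cell=0.25):
    # Union of per-group retrievals; with only partial pose information,
    # sub-tables for unobserved groups are simply skipped.
    found = set()
    for table, g in zip(tables, GROUPS):
        addr = tuple(np.round(np.asarray(t)[g] / cell).astype(int))
        found |= table.get(addr, set())
    return found                                     # candidate (j, r) keys
```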
32. The method of claim 31, wherein finding the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j}, comprises the steps of:
a. constructing from the matching vector pairs all the permutations of valid pair sequences; wherein all the valid pair sequences are constructed only from sequences of matching vector pairs that have a mutual sequential relation; wherein the mutual sequential relation between any two matching vector pairs ... (t_nk, m_rj) ... (t_ck, m_ej) ... included in the valid pair sequence has to fulfill the following three conditions: (I) c is not equal to n; (II) if c > n, then r ≤ e; (III) if c < n, then e ≤ r;
b. employing an optimization algorithm to find the valid pair sequence with the highest sequence matching score; wherein a sequence matching score of a valid pair sequence is defined as the sum of all the pair matching scores that pertain to the matching vector pairs that are components of the valid pair sequence;
c. naming the valid pair sequence with the highest sequence matching score the optimal pair sequence, with the optimal matching score.
33. The method of claim 32, wherein the recorded trajectories of human body parts are equivalent to recorded poses of human body parts; wherein both the collection of model vectors sequences {M_j} and the input vectors sequence T_k are derived from samples of recorded poses of human body parts and from temporal derivatives of such samples.
34. The method of claim 33, wherein the samples of recorded poses of human body parts and their temporal derivatives correspond to samples of recorded angular poses of body parts and to temporal derivatives of samples of recorded angular poses of human body parts.
35. The method of claim 34, wherein the input vectors sequence T_k is acquired at a sparse, slow input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a fast model-rate of sampling.
36. The method of claim 35, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
37. The method of claim 34, wherein the input vectors sequence T_k is acquired at a fast input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a slow, sparse model-rate of sampling.
38. The method of claim 37, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
39. The method of claim 31, wherein finding the optimal matching score between the input vectors sequence T_k and one model vectors sequence M_j, which is one of the members of the collection of model vectors sequences {M_j}, comprises the steps of:
a. constructing from the matching vector pairs all the permutations of valid pair sequences; wherein all the valid pair sequences are constructed only from sequences of matching vector pairs that have a mutual sequential relation; wherein the mutual sequential relation between any two matching vector pairs ... (t_nk, m_rj) ... (t_ck, m_ej) ... included in the valid pair sequence has to fulfill the following three conditions: (I) r is not equal to e; (II) if e > r, then n ≤ c; (III) if e < r, then c ≤ n;
b. employing an optimization algorithm to find the valid pair sequence with the highest sequence matching score; wherein a sequence matching score of a valid pair sequence is defined as the sum of all the pair matching scores that pertain to the matching vector pairs that are components of the valid pair sequence;
c. naming the valid pair sequence with the highest sequence matching score the optimal pair sequence, with the optimal matching score.
40. The method of claim 39, wherein the recorded trajectories of human body parts are equivalent to recorded poses of human body parts; wherein both the collection of model vectors sequences {M_j} and the input vectors sequence T_k are derived from samples of recorded poses of human body parts and from temporal derivatives of such samples.
41. The method of claim 40, wherein the samples of recorded poses of human body parts and their temporal derivatives correspond to samples of recorded angular poses of body parts and to temporal derivatives of samples of recorded angular poses of human body parts.
42. The method of claim 41, wherein the input vectors sequence T_k is acquired at a sparse, slow input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a fast model-rate of sampling.
43. The method of claim 42, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
44. The method of claim 41, wherein the input vectors sequence T_k is acquired at a fast input-rate of sampling, while each of the model vectors sequences M_j, which are members of the collection of model vectors sequences {M_j}, is acquired at a slow, sparse model-rate of sampling.
45. The method of claim 44, wherein the optimal matching scores are obtained by a special optimization algorithm that employs principles of dynamic programming.
US10/427,882 | 2002-05-06 | 2003-05-01 | Method of recognition of human motion, vector sequences and speech | Expired - Fee Related | US7366645B2 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US10/427,882 US7366645B2 (en) | 2002-05-06 | 2003-05-01 | Method of recognition of human motion, vector sequences and speech

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US37831602P | 2002-05-06 | 2002-05-06 |
US38100202P | 2002-05-15 | 2002-05-15 |
US10/427,882 US7366645B2 (en) | 2002-05-06 | 2003-05-01 | Method of recognition of human motion, vector sequences and speech

Publications (2)

Publication Number | Publication Date
US20030208289A1 (en) | 2003-11-06
US7366645B2 (en) | 2008-04-29

Family

ID=29273653

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
US10/427,882 Expired - Fee Related US7366645B2 (en) | 2002-05-06 | 2003-05-01 | Method of recognition of human motion, vector sequences and speech

Country Status (1)

Country | Link
US (1) | US7366645B2 (en)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR100783552B1 (en)* | 2006-10-11 | 2007-12-07 | Samsung Electronics Co., Ltd. | Method and device for input control of a mobile terminal
US20080133496A1 (en)* | 2006-12-01 | 2008-06-05 | International Business Machines Corporation | Method, computer program product, and device for conducting a multi-criteria similarity search
US8856002B2 (en)* | 2007-04-12 | 2014-10-07 | International Business Machines Corporation | Distance metrics for universal pattern processing tasks
US8098891B2 (en)* | 2007-11-29 | 2012-01-17 | Nec Laboratories America, Inc. | Efficient multi-hypothesis multi-human 3D tracking in crowded scenes
KR101210277B1 (en)* | 2008-12-23 | 2012-12-18 | Electronics and Telecommunications Research Institute | System for activity monitoring and information transmission method for activity monitoring
US7983450B2 (en)* | 2009-03-16 | 2011-07-19 | The Boeing Company | Method, apparatus and computer program product for recognizing a gesture
US8259994B1 (en)* | 2010-09-14 | 2012-09-04 | Google Inc. | Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
US8908913B2 (en)* | 2011-12-19 | 2014-12-09 | Mitsubishi Electric Research Laboratories, Inc. | Voting-based pose estimation for 3D sensors
KR101762010B1 (en) | 2015-08-28 | 2017-07-28 | Kyung Hee University Industry-Academic Cooperation Foundation | Method of modeling a video-based interactive activity using the skeleton posture dataset
US10447972B2 (en)* | 2016-07-28 | 2019-10-15 | Chigru Innovations (OPC) Private Limited | Infant monitoring system
CN109271896B (en)* | 2018-08-30 | 2021-08-20 | Nantong Institute of Technology | Student evaluation system and method based on image recognition
CN109872359A (en)* | 2019-01-27 | 2019-06-11 | Wuhan Xingxun Intelligent Technology Co., Ltd. | Sitting posture detection method, device and computer-readable storage medium
US12361813B2 (en)* | 2020-10-12 | 2025-07-15 | Nec Corporation | Server device, visitor notification system, visitor notification method, and storage medium


Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5502774A (en)* | 1992-06-09 | 1996-03-26 | International Business Machines Corporation | Automatic recognition of a consistent message using multiple complimentary sources of information
US5621809A (en)* | 1992-06-09 | 1997-04-15 | International Business Machines Corporation | Computer program product for automatic recognition of a consistent message using multiple complimentary sources of information
US5386492A (en)* | 1992-06-29 | 1995-01-31 | Kurzweil Applied Intelligence, Inc. | Speech recognition system utilizing vocabulary model preselection
US5577249A (en)* | 1992-07-31 | 1996-11-19 | International Business Machines Corporation | Method for finding a reference token sequence in an original token string within a database of token strings using appended non-contiguous substrings
US5581276A (en)* | 1992-09-08 | 1996-12-03 | Kabushiki Kaisha Toshiba | 3D human interface apparatus using motion recognition based on dynamic image processing
US5714698A (en)* | 1994-02-03 | 1998-02-03 | Canon Kabushiki Kaisha | Gesture input method and apparatus
US6571193B1 (en)* | 1996-07-03 | 2003-05-27 | Hitachi, Ltd. | Method, apparatus and system for recognizing actions
US6256033B1 (en)* | 1997-10-15 | 2001-07-03 | Electric Planet | Method and apparatus for real-time gesture recognition
US6292779B1 (en)* | 1998-03-09 | 2001-09-18 | Lernout & Hauspie Speech Products N.V. | System and method for modeless large vocabulary speech recognition
US6269172B1 (en)* | 1998-04-13 | 2001-07-31 | Compaq Computer Corporation | Method for tracking the motion of a 3-D figure
US6222465B1 (en)* | 1998-12-09 | 2001-04-24 | Lucent Technologies Inc. | Gesture-based computer interface
US6371711B1 (en)* | 1999-03-19 | 2002-04-16 | Integrated Environmental Technologies, Llc | Valveless continuous atmospherically isolated container feeding assembly
US6542869B1 (en)* | 2000-05-11 | 2003-04-01 | Fuji Xerox Co., Ltd. | Method for automatic analysis of audio including music and speech
US20020052742A1 (en)* | 2000-07-20 | 2002-05-02 | Chris Thrasher | Method and apparatus for generating and displaying N-best alternatives in a speech recognition system
US20020062302A1 (en)* | 2000-08-09 | 2002-05-23 | Oosta Gary Martin | Methods for document indexing and analysis
US20020181711A1 (en)* | 2000-11-02 | 2002-12-05 | Compaq Information Technologies Group, L.P. | Music similarity function based on signal analysis

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20050081160A1 (en)* | 2003-10-09 | 2005-04-14 | Wee Susie J. | Communication and collaboration system using rich media environments
US7590941B2 (en)* | 2003-10-09 | 2009-09-15 | Hewlett-Packard Development Company, L.P. | Communication and collaboration system using rich media environments
US20140325459A1 (en)* | 2004-02-06 | 2014-10-30 | Nokia Corporation | Gesture control system
US20060098865A1 (en)* | 2004-11-05 | 2006-05-11 | Ming-Hsuan Yang | Human pose estimation with data driven belief propagation
US7212665B2 (en) | 2004-11-05 | 2007-05-01 | Honda Motor Co. | Human pose estimation with data driven belief propagation
US20080288493A1 (en)* | 2005-03-16 | 2008-11-20 | Imperial Innovations Limited | Spatio-Temporal Self Organising Map
US8464188B1 (en)* | 2005-08-23 | 2013-06-11 | The Mathworks, Inc. | Multi-rate hierarchical state diagrams
US20070098254A1 (en)* | 2005-10-28 | 2007-05-03 | Ming-Hsuan Yang | Detecting humans via their pose
US7519201B2 (en) | 2005-10-28 | 2009-04-14 | Honda Motor Co., Ltd. | Detecting humans via their pose
US20070244630A1 (en)* | 2006-03-06 | 2007-10-18 | Kabushiki Kaisha Toshiba | Behavior determining apparatus, method, and program
US7650318B2 (en)* | 2006-03-06 | 2010-01-19 | Kabushiki Kaisha Toshiba | Behavior recognition using vectors of motion properties based trajectory and movement type
US8432449B2 (en)* | 2007-08-13 | 2013-04-30 | Fuji Xerox Co., Ltd. | Hidden markov model for camera handoff
US20090046153A1 (en)* | 2007-08-13 | 2009-02-19 | Fuji Xerox Co., Ltd. | Hidden markov model for camera handoff
US20090052785A1 (en)* | 2007-08-20 | 2009-02-26 | Gesturetek, Inc. | Rejecting out-of-vocabulary words
US20090051648A1 (en)* | 2007-08-20 | 2009-02-26 | Gesturetek, Inc. | Gesture-based mobile interaction
US9261979B2 (en) | 2007-08-20 | 2016-02-16 | Qualcomm Incorporated | Gesture-based mobile interaction
US8565535B2 (en) | 2007-08-20 | 2013-10-22 | Qualcomm Incorporated | Rejecting out-of-vocabulary words
WO2009026337A1 (en)* | 2007-08-20 | 2009-02-26 | Gesturetek, Inc. | Enhanced rejection of out-of-vocabulary words
US8213706B2 (en)* | 2008-04-22 | 2012-07-03 | Honeywell International Inc. | Method and system for real-time visual odometry
US20090263009A1 (en)* | 2008-04-22 | 2009-10-22 | Honeywell International Inc. | Method and system for real-time visual odometry
US7961910B2 (en) | 2009-10-07 | 2011-06-14 | Microsoft Corporation | Systems and methods for tracking a model
US8564534B2 (en) | 2009-10-07 | 2013-10-22 | Microsoft Corporation | Human tracking system
US9679390B2 (en) | 2009-10-07 | 2017-06-13 | Microsoft Technology Licensing, Llc | Systems and methods for removing a background of an image
US9821226B2 (en) | 2009-10-07 | 2017-11-21 | Microsoft Technology Licensing, Llc | Human tracking system
US9659377B2 (en) | 2009-10-07 | 2017-05-23 | Microsoft Technology Licensing, Llc | Methods and systems for determining and tracking extremities of a target
US20110234589A1 (en)* | 2009-10-07 | 2011-09-29 | Microsoft Corporation | Systems and methods for tracking a model
US8483436B2 (en) | 2009-10-07 | 2013-07-09 | Microsoft Corporation | Systems and methods for tracking a model
US9582717B2 (en) | 2009-10-07 | 2017-02-28 | Microsoft Technology Licensing, Llc | Systems and methods for tracking a model
US8325984B2 (en) | 2009-10-07 | 2012-12-04 | Microsoft Corporation | Systems and methods for tracking a model
US8542910B2 (en) | 2009-10-07 | 2013-09-24 | Microsoft Corporation | Human tracking system
US20110080336A1 (en)* | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Human Tracking System
US9522328B2 (en) | 2009-10-07 | 2016-12-20 | Microsoft Technology Licensing, Llc | Human tracking system
US20110081045A1 (en)* | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Systems And Methods For Tracking A Model
US8970487B2 (en) | 2009-10-07 | 2015-03-03 | Microsoft Technology Licensing, Llc | Human tracking system
US8963829B2 (en) | 2009-10-07 | 2015-02-24 | Microsoft Corporation | Methods and systems for determining and tracking extremities of a target
US8897495B2 (en) | 2009-10-07 | 2014-11-25 | Microsoft Corporation | Systems and methods for tracking a model
US8861839B2 (en) | 2009-10-07 | 2014-10-14 | Microsoft Corporation | Human tracking system
US8891827B2 (en) | 2009-10-07 | 2014-11-18 | Microsoft Corporation | Systems and methods for tracking a model
US8867820B2 (en) | 2009-10-07 | 2014-10-21 | Microsoft Corporation | Systems and methods for removing a background of an image
US20120123733A1 (en)* | 2010-11-11 | 2012-05-17 | National Chiao Tung University | Method system and computer readable media for human movement recognition
WO2012075221A1 (en)* | 2010-12-01 | 2012-06-07 | Data Engines Corporation | Method for inferring attributes of a data set and recognizers used thereon
US9646249B2 (en) | 2010-12-01 | 2017-05-09 | Data Engines Corporation | Method for inferring attributes of a data set and recognizers used thereon
US20130294651A1 (en)* | 2010-12-29 | 2013-11-07 | Thomson Licensing | System and method for gesture recognition
US9323337B2 (en)* | 2010-12-29 | 2016-04-26 | Thomson Licensing | System and method for gesture recognition
US8793118B2 (en)* | 2011-11-01 | 2014-07-29 | PES School of Engineering | Adaptive multimodal communication assist system
US20130108994A1 (en)* | 2011-11-01 | 2013-05-02 | PES School of Engineering | Adaptive Multimodal Communication Assist System
US11126832B2 (en)* | 2012-03-16 | 2021-09-21 | PixArt Imaging Incorporation, R.O.C. | User identification system and method for identifying user
US10832042B2 (en)* | 2012-03-16 | 2020-11-10 | Pixart Imaging Incorporation | User identification system and method for identifying user
US20190303659A1 (en)* | 2012-03-16 | 2019-10-03 | Pixart Imaging Incorporation | User identification system and method for identifying user
US20130243242A1 (en)* | 2012-03-16 | 2013-09-19 | Pixart Imaging Incorporation | User identification system and method for identifying user
US9280714B2 (en)* | 2012-03-16 | 2016-03-08 | PixArt Imaging Incorporation, R.O.C. | User identification system and method for identifying user
US20160140385A1 (en)* | 2012-03-16 | 2016-05-19 | Pixart Imaging Incorporation | User identification system and method for identifying user
US9483122B2 (en)* | 2012-05-10 | 2016-11-01 | Koninklijke Philips N.V. | Optical shape sensing device and gesture control
US20150109196A1 (en)* | 2012-05-10 | 2015-04-23 | Koninklijke Philips N.V. | Gesture control
US20140101752A1 (en)* | 2012-10-09 | 2014-04-10 | Lockheed Martin Corporation | Secure gesture
US9195813B2 (en)* | 2012-10-09 | 2015-11-24 | Lockheed Martin Corporation | Secure gesture
US20140119640A1 (en)* | 2012-10-31 | 2014-05-01 | Microsoft Corporation | Scenario-specific body-part tracking
US8867786B2 (en)* | 2012-10-31 | 2014-10-21 | Microsoft Corporation | Scenario-specific body-part tracking
US20150029097A1 (en)* | 2012-10-31 | 2015-01-29 | Microsoft Corporation | Scenario-specific body-part tracking
US9489042B2 (en)* | 2012-10-31 | 2016-11-08 | Microsoft Technology Licensing, Llc | Scenario-specific body-part tracking
US9697827B1 (en)* | 2012-12-11 | 2017-07-04 | Amazon Technologies, Inc. | Error reduction in speech processing
CN103065161A (en)* | 2012-12-25 | 2013-04-24 | Southwest University of Science and Technology | Human behavior recognition algorithm based on normalization R transformation hierarchical model
US20140267611A1 (en)* | 2013-03-14 | 2014-09-18 | Microsoft Corporation | Runtime engine for analyzing user motion in 3D images
US12255669B2 (en)* | 2013-11-07 | 2025-03-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and devices for vector segmentation for coding
US20240275401A1 (en)* | 2013-11-07 | 2024-08-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and devices for vector segmentation for coding
US11894865B2 (en)* | 2013-11-07 | 2024-02-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and devices for vector segmentation for coding
US20150160327A1 (en)* | 2013-12-06 | 2015-06-11 | Tata Consultancy Services Limited | Monitoring motion using skeleton recording devices
US20170103672A1 (en)* | 2015-10-09 | 2017-04-13 | The Regents Of The University Of California | System and method for gesture capture and real-time cloud based avatar training
US10891472B2 (en) | 2016-06-04 | 2021-01-12 | KinTrans, Inc. | Automatic body movement recognition and association system
US10628664B2 (en) | 2016-06-04 | 2020-04-21 | KinTrans, Inc. | Automatic body movement recognition and association system
US11430267B2 (en)* | 2017-06-20 | 2022-08-30 | Volkswagen Aktiengesellschaft | Method and device for detecting a user input on the basis of a gesture
US11106947B2 (en)* | 2017-12-13 | 2021-08-31 | Canon Kabushiki Kaisha | System and method of classifying an action or event
US20190180149A1 (en)* | 2017-12-13 | 2019-06-13 | Canon Kabushiki Kaisha | System and method of classifying an action or event
US20190329790A1 (en)* | 2018-04-25 | 2019-10-31 | Uber Technologies, Inc. | Systems and Methods for Using Machine Learning to Determine Passenger Ride Experience
US20210244317A1 (en)* | 2018-04-26 | 2021-08-12 | Hitachi High-Tech Corporation | Walking mode display method, walking mode display system and walking mode analyzer

Also Published As

Publication number | Publication date
US7366645B2 (en) | 2008-04-29

Similar Documents

Publication | Title
US7366645B2 (en) | Method of recognition of human motion, vector sequences and speech
Li et al. | Sign transition modeling and a scalable solution to continuous sign language recognition for real-world applications
US5544257A (en) | Continuous parameter hidden Markov model approach to automatic handwriting recognition
Ong et al. | Automatic sign language analysis: A survey and the future beyond lexical meaning
Graves | Connectionist temporal classification
Ivanov et al. | Recognition of visual activities and interactions by stochastic parsing
CN101101752B (en) | A lip-reading recognition system for monosyllabic languages based on visual features
Hu et al. | Writer independent on-line handwriting recognition using an HMM approach
Kulkarni et al. | Continuous action recognition based on sequence alignment
Gao et al. | Transition movement models for large vocabulary continuous sign language recognition
Wu et al. | A novel lip descriptor for audio-visual keyword spotting based on adaptive decision fusion
EP1671277A1 (en) | System and method for audio-visual content synthesis
Elakkiya et al. | Subunit sign modeling framework for continuous sign language recognition
Brand | An entropic estimator for structure discovery
Luettin et al. | Continuous audio-visual speech recognition
Potamianos et al. | Joint audio-visual speech processing for recognition and enhancement
Liu et al. | Audio-visual keyword spotting based on adaptive decision fusion under noisy conditions for human-robot interaction
Zhao et al. | Learning a highly structured motion model for 3D human tracking
Roy et al. | Learning audio-visual associations using mutual information
Zahedi et al. | Geometric features for improving continuous appearance-based sign language recognition
Rigoll et al. | An investigation of context-dependent and hybrid modeling techniques for very large vocabulary on-line cursive handwriting recognition
Paleček | Experimenting with lipreading for large vocabulary continuous speech recognition
Chen et al. | Joint audio-video driven facial animation
Rajah | Chereme-based recognition of isolated, dynamic gestures from South African Sign Language with Hidden Markov Models
Abdelaziz | Turbo decoders for audio-visual continuous speech recognition

Legal Events

Code | Title | Description
REMI | Maintenance fee reminder mailed |
FPAY | Fee payment | Year of fee payment: 4
SULP | Surcharge for late payment |
REMI | Maintenance fee reminder mailed |
LAPS | Lapse for failure to pay maintenance fees |
STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
FP | Lapsed due to failure to pay maintenance fee | Effective date: 20160429

