US12204695B2 - Dynamic, free-space user interactions for machine control

Info

Publication number
US12204695B2
Authority
US
United States
Prior art keywords
gesture
control object
user
virtual
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US18/219,517
Other versions
US20240061511A1 (en)
Inventor
Raffi Bedikian
Jonathan Marsden
Keith Mertens
David Holz
Maxwell Sills
Matias Perez
Gabriel Hare
Ryan Julian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ultrahaptics IP Two Ltd
LMI Liquidating Co LLC
Original Assignee
Ultrahaptics IP Two Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/154,730 (external priority patent US9501152B2)
Application filed by Ultrahaptics IP Two Ltd
Priority to US18/219,517 (US12204695B2)
Publication of US20240061511A1
Assigned to Ultrahaptics IP Two Limited (assignment of assignors interest; see document for details). Assignors: LMI Liquidating Co. LLC
Assigned to LMI Liquidating Co. LLC (assignment of assignors interest; see document for details). Assignors: Leap Motion, Inc.
Assigned to Leap Motion, Inc. (assignment of assignors interest; see document for details). Assignors: Bedikian, Raffi; Hare, Gabriel; Holz, David; Julian, Ryan; Marsden, Jonathan; Mertens, Keith; Perez, Matias; Sills, Maxwell
Priority to US18/988,746 (US20250130648A1)
Application granted
Publication of US12204695B2
Legal status: Active (current)
Anticipated expiration

Abstract

Embodiments of display control based on dynamic user interactions generally include capturing a plurality of temporally sequential images of the user, or a body part or other control object manipulated by the user, and computationally analyzing the images to recognize a gesture performed by the user. In some embodiments, the gesture is identified as an engagement gesture, and compared with reference gestures from a library of reference gestures. In some embodiments, a degree of completion of the recognized engagement gesture is determined, and the display contents are modified in accordance therewith. In some embodiments, a dominant gesture is computationally determined from among a plurality of user gestures, and an action displayed on the device is based on the dominant gesture.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 17/666,534, filed Feb. 7, 2022, which is a continuation of U.S. patent application Ser. No. 16/195,755, filed Nov. 19, 2018, which is a continuation of U.S. patent application Ser. No. 15/279,363, filed Sep. 28, 2016, which is a continuation of U.S. patent application Ser. No. 14/155,722, filed Jan. 15, 2014. U.S. patent application Ser. No. 14/155,722 claims priority to and the benefit of, and incorporates herein by reference in their entireties, U.S. Provisional Application Nos. 61/825,515 and 61/825,480, both filed on May 20, 2013; No. 61/873,351, filed on Sep. 3, 2013; No. 61/877,641, filed on Sep. 13, 2013; No. 61/816,487, filed on Apr. 26, 2013; No. 61/824,691, filed on May 17, 2013; Nos. 61/752,725, 61/752,731, and 61/752,733, all filed on Jan. 15, 2013; No. 61/791,204, filed on Mar. 15, 2013; Nos. 61/808,959 and 61/808,984, both filed on Apr. 5, 2013; and No. 61/872,538, filed on Aug. 30, 2013. U.S. patent application Ser. No. 14/155,722 is a Continuation-in-Part of U.S. patent application Ser. No. 14/154,730, filed Jan. 14, 2014.
FIELD OF THE TECHNOLOGY DISCLOSED
Embodiments relate generally to machine-user interfaces, and more specifically to the interpretation of free-space user movements as control inputs.
BACKGROUND
Current computer systems typically include a graphic user interface that can be navigated by a cursor, i.e., a graphic element displayed on the screen and movable relative to other screen content, which serves to indicate a position on the screen. The cursor is usually controlled by the user via a computer mouse or touch pad. In some systems, the screen itself doubles as an input device, allowing the user to select and manipulate graphic user interface components by touching the screen where they are located. While touch can be convenient and relatively intuitive for many users, it is not very accurate: a user's fingers can easily cover multiple links on a crowded display, leading to erroneous selections. Touch is also unforgiving, in that it requires the user's motions to be confined to specific areas of space. For example, if the user's hands drift merely one key-width to the right or left while typing, nonsense appears on the screen.
Mice, touch pads, and touch screens can be cumbersome and inconvenient to use. Touch pads and touch screens require the user to be in close physical proximity to the pad (which is often integrated into a keyboard) or screen so as to be able to reach them, which significantly restricts users' range of motion while providing input to the system. Touch is, moreover, not always reliably detected, sometimes necessitating repeated motions across the pad or screen to effect the input. Mice facilitate user input at some distance from the computer and screen (determined by the length of the connection cable or the range of the wireless connection between computer and mouse), but require a flat surface with suitable surface properties, or even a special mouse pad, to function properly. Furthermore, prolonged use of a mouse, in particular if it is positioned sub-optimally relative to the user, can result in discomfort or even pain.
Accordingly, alternative input mechanisms that provide users with the advantages of intuitive controls while freeing them from the many disadvantages of touch-based control are highly desirable.
SUMMARY
Aspects of the system and methods described herein provide for improved machine interface and/or control by interpreting the positions, configurations, and/or motions of one or more control objects (or portions thereof) in free space within a field of view of an image-capture device. The control object(s) may be or include a user's body part(s) such as, e.g., the user's hand(s), finger(s), thumb(s), head, etc.; a suitable hand-held pointing device such as a stylus, wand, or some other inanimate object; or generally any animate or inanimate object or object portion (or combinations thereof) manipulated by the user for the purpose of conveying information to the machine. In various embodiments, the shapes, positions, and configurations of one or more control objects are reconstructed in three dimensions (e.g., based on a collection of two-dimensional images corresponding to a set of cross-sections of the object), and tracked as a function of time to discern motion. The shape, configuration, position(s), and motion(s) of the control object(s), when constituting user input to the machine, are herein referred to as “gestures.”
In embodiments, the position, orientation, and/or motion of one or more control objects are tracked relative to one or more virtual control constructs (e.g., virtual control surfaces) defined in space (e.g., programmatically) to facilitate determining whether an engagement gesture has occurred. Engagement gestures can include engaging with a control (e.g., selecting a button or switch), disengaging with a control (e.g., releasing a button or switch), motions that do not involve engagement with any control (e.g., motion that is tracked by the system, possibly followed by a cursor, and/or a single object in an application or the like), environmental interactions (i.e., gestures to direct an environment rather than a specific control, such as scroll up/down), special-purpose gestures (e.g., brighten/darken screen, volume control, etc.), as well as others or combinations thereof.
Engagement gestures can be mapped to one or more controls of a machine or application executing on a machine, or to a control-less screen location of a display device associated with the machine under control. Embodiments provide for mapping of movements in three-dimensional (3D) space conveying control and/or other information to zero, one, or more controls. Controls can include embedded controls (e.g., sliders, buttons, and other control objects in an application) or environmental-level controls (e.g., windowing controls, scrolls within a window, and other controls affecting the control environment). In embodiments, controls may be displayable using two-dimensional (2D) presentations (e.g., a traditional cursor symbol, cross-hairs, icon, graphical representation of the control object, or other displayable object) on, e.g., one or more display screens, and/or 3D presentations using holography, projectors, or other mechanisms for creating 3D presentations. Presentations may also be audible (e.g., mapped to sounds, or other mechanisms for conveying audible information) and/or haptic.
In an embodiment, determining whether motion information defines an engagement gesture can include finding an intersection (also referred to as a contact, pierce, or a “virtual touch”) of motion of a control object with a virtual control surface, whether actually detected or determined to be imminent; dis-intersection (also referred to as a “pull back” or “withdrawal”) of the control object from a virtual control surface; a non-intersection—i.e., motion relative to a virtual control surface (e.g., wave of a hand approximately parallel to the virtual surface to “erase” a virtual chalkboard); or other types of identified motions relative to the virtual control surface suited to defining gestures conveying information to the machine. In an embodiment, determining whether motion information defines an engagement gesture can include determining one or more engagement attributes from the motion information about the control object. In an embodiment, engagement attributes include motion attributes (e.g., speed, acceleration, duration, distance, etc.), gesture attributes (e.g., hand, two hands, tools, type, precision, etc.), other attributes and/or combinations thereof. In an embodiment, determining whether motion information defines an engagement gesture can include filtering motion information to determine whether the motion comprises an engagement gesture. Filtering may be applied based upon engagement attributes, characteristics of motion, position in space, other criteria, and/or combinations thereof. Filtering can enable identification of engagement gestures, discrimination of engagement gestures from extraneous motions, discrimination of engagement gestures of differing types or meanings, and so forth.
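By way of illustration, the following Python sketch classifies a control object's motion relative to a planar virtual control surface using a signed-distance test. The function and parameter names (classify_engagement, plane_point, plane_normal) and the sign convention are illustrative assumptions, not definitions from the specification.

```python
import numpy as np

def classify_engagement(prev_pos, curr_pos, plane_point, plane_normal):
    """Classify motion relative to a planar virtual control surface.

    Returns 'intersection' (virtual touch), 'dis-intersection' (pull back),
    or 'non-intersection' (motion remaining on one side of the surface).
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d_prev = np.dot(prev_pos - plane_point, n)   # signed distance before the move
    d_curr = np.dot(curr_pos - plane_point, n)   # signed distance after the move
    if d_prev > 0 >= d_curr:       # crossed from in front to behind the plane
        return "intersection"
    if d_prev <= 0 < d_curr:       # withdrew back through the plane
        return "dis-intersection"
    return "non-intersection"

# Example: a fingertip pokes through a virtual surface 30 cm from the sensor.
print(classify_engagement(np.array([0., 0., 0.35]), np.array([0., 0., 0.28]),
                          plane_point=np.array([0., 0., 0.30]),
                          plane_normal=np.array([0., 0., 1.])))  # -> intersection
```

A motion that stays on one side of the plane (e.g., a hand waving parallel to the surface) yields "non-intersection" and could then be tested against environmental or special-purpose gestures instead.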
Various embodiments provide high detection sensitivity for the user's gestures to allow the user to accurately and quickly (i.e., without any unnecessary delay time) control an electronic device using gestures of a variety of types and sensitivities (e.g., motions of from a few millimeters to over a meter) and, in some embodiments, to control the relationship between the physical span of a gesture and the resulting displayed response. The user's intent may be identified by, for example, comparing the detected gesture against a set of gesture primitives or other definitions that can be stored in a database. Each gesture primitive relates to a detected characteristic or feature of one or more gestures. Primitives can be coded, for example, as one or more vectors, scalars, tensors, and so forth indicating information about an action, command or other input, which is processed by the currently running application—e.g., to invoke a corresponding instruction or instruction sequence, which is thereupon executed, or to provide a parameter value or other input data. Because some gesture-recognition embodiments can provide high detection sensitivity, fine distinctions such as relatively small movements, accelerations, decelerations, velocities, and combinations thereof of a user's body part (e.g., a finger) or other control object can be accurately detected and recognized, thereby allowing the user to accurately interact with an electronic device and/or the applications executed and/or displayed thereon using a comparatively rich vocabulary of gestures.
In some embodiments, the gesture-recognition system provides functionality for the user to statically or dynamically adjust the relationship between the user's actual motion and a resulting response, e.g., object movement displayed on the electronic device's screen. In static operation, the user manually sets this sensitivity level by manipulating a displayed slide switch or other icon using, for example, the gesture-recognition system described herein. In dynamic operation, the system automatically responds to the distance between the user and the device, the nature of the activity being displayed, the available physical space, and/or the user's own pattern of response (e.g., scaling the response based on the volume of space in which the user's gestures appear to be confined). For example, when limited space is available, the relationship may be adjusted, automatically or manually by the user, to a ratio smaller than one (e.g., 1:10), such that each unit (e.g., one millimeter) of the user's actual movement results in ten units (e.g., 10 pixels or 10 millimeters) of object movement displayed on the screen. Similarly, when the user is relatively close to the electronic device, the user may adjust (or the device, sensing the user's distance, may autonomously adjust) the relationship to a ratio larger than one (e.g., 10:1) to compensate. Accordingly, adjusting the ratio of the user's actual motion to the resulting action (e.g., object movement) displayed on the screen provides extra flexibility for the user to remotely command the electronic device and/or control the virtual environment displayed thereon.
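A minimal Python sketch of this motion-to-display ratio follows. The helper names and the clamped distance-based policy in auto_ratio are assumptions made for illustration; the specification does not prescribe a particular formula.

```python
def display_displacement(actual_mm, ratio):
    """Map physical gesture distance to on-screen movement.

    ratio is user:display, so ratio = 0.1 (i.e., 1:10) amplifies motion
    and ratio = 10.0 (i.e., 10:1) attenuates it.
    """
    return actual_mm / ratio

def auto_ratio(user_distance_m, reference_m=1.0):
    """Hypothetical dynamic policy: attenuate for a nearby user (ratio > 1),
    amplify for a distant one (ratio < 1), clamped to a sane range."""
    return max(0.1, min(10.0, reference_m / user_distance_m))

# 1 mm of hand motion becomes 10 units when space is tight (1:10)...
print(display_displacement(1.0, ratio=0.1))    # -> 10.0
# ...and 10 mm becomes 1 unit when the user sits close (10:1).
print(display_displacement(10.0, ratio=10.0))  # -> 1.0
print(round(auto_ratio(0.3), 2))               # nearby user -> ratio ~3.33
```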
In some embodiments, the system enables or provides an on-screen indicator showing in real time the degree of gesture completion, providing feedback letting the user know when a particular action is accomplished (e.g., a control is selected or a certain control manipulation effected). For example, the gesture-recognition system may recognize the gesture by matching it to a database record that includes multiple images, each of which is associated with a degree (e.g., from 1% to 100%) of completion of the performed gesture. The degree of completion of the performed gesture is then rendered on the screen. For example, as the user moves a finger closer to an electronic device to perform a clicking or touching gesture, the device display may show a hollow circular icon that a rendering application gradually fills in with a color indicating how close the user's motion is to completing the gesture. When the user has fully performed the clicking or touching gesture, the circle is entirely filled in; this may result in, for example, labeling the desired virtual object as a chosen object. The degree-of-completion indicator thus enables the user to recognize the exact moment when the virtual object is selected.
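One plausible way to compute such a degree of completion is sketched below, under the assumption that completion is measured by the fraction of a stored template's arclength traversed so far; the specification instead describes matching against database records holding per-degree images, so this is a simplified stand-in.

```python
import numpy as np

def completion_degree(observed, template):
    """Fraction of a stored gesture template covered by the trajectory
    observed so far, measured by arclength."""
    def arclen(pts):
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return min(1.0, arclen(np.asarray(observed)) / arclen(np.asarray(template)))

# Render, e.g., a hollow circular icon filled to this fraction; select at 100%.
template = [[0, 0, 0.4], [0, 0, 0.3], [0, 0, 0.2]]   # a 20 cm "push" gesture
observed = [[0, 0, 0.4], [0, 0, 0.3]]                # 10 cm performed so far
print(f"{completion_degree(observed, template):.0%}")  # -> 50%
```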
Some embodiments discern, in real time, a dominant gesture from unrelated movements that may each qualify as a gesture, and may output a signal indicative of the dominant gesture. In various embodiments, the gesture-recognition system identifies a user's dominant gesture when more than one gesture (e.g., an arm-waving gesture and a finger-flexing gesture) is detected. For example, the gesture-recognition system may computationally represent the waving gesture as a waving trajectory and the finger-flexing gestures as five separate (and smaller) trajectories. Each trajectory may be converted into a vector along, for example, six Euler degrees of freedom in Euler space. The vector with the largest magnitude represents the dominant component of the motion (e.g., waving in this case) and the rest of the vectors may be ignored. In some embodiments, a vector filter that can be implemented using conventional filtering techniques is applied to the multiple vectors to filter out the small vectors and identify the dominant vector. This process may be repetitive, iterating until one vector—the dominant component of the motion—is identified. The identified dominant component can then be used to manipulate the electronic device or the applications thereof.
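The dominant-gesture selection might be sketched as follows. The iterative relative-magnitude filter (rel_threshold) is an illustrative stand-in for the vector filter mentioned above, and summarizing each trajectory by its net 6-DOF displacement is a simplification of the trajectory-to-vector conversion.

```python
import numpy as np

def dominant_gesture(trajectories, rel_threshold=0.5):
    """Iteratively filter 6-DOF gesture vectors (x, y, z, roll, pitch, yaw)
    until one dominant component remains."""
    vectors = {name: np.asarray(end, float) - np.asarray(start, float)
               for name, (start, end) in trajectories.items()}
    while len(vectors) > 1:
        mags = {name: np.linalg.norm(v) for name, v in vectors.items()}
        cutoff = rel_threshold * max(mags.values())
        survivors = {n: v for n, v in vectors.items() if mags[n] >= cutoff}
        if len(survivors) == len(vectors):         # nothing filtered out:
            top = max(mags, key=mags.get)          # fall back to the largest
            survivors = {top: vectors[top]}
        vectors = survivors
    return next(iter(vectors))

waving  = ([0, 0, 0, 0, 0, 0], [0.5, 0.1, 0, 0, 0, 0.3])    # large arm arc
flexing = ([0, 0, 0, 0, 0, 0], [0.02, 0.01, 0, 0.1, 0, 0])  # small finger curl
print(dominant_gesture({"wave": waving, "flex": flexing}))  # -> 'wave'
```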
Accordingly, in one aspect, embodiments provide a method of controlling a machine. The method includes sensing a variation of position of at least one control object using an imaging system; determining from the variation one or more primitives describing at least one of a motion made by the control object and the character of the control object; comparing the primitive(s) to one or more templates in a library of gesture templates; selecting from a result of the comparing a set of templates of possible gestures corresponding to the one or more primitives; and providing at least one of the set of templates of possible gestures as an indication of a command to issue to a machine under control responsive to the variation. The one or more control objects may include a body part of a user.
In some embodiments, sensing a variation of position of at least one control object using an imaging system comprises capturing a plurality of temporally sequential images of one or more control objects manipulated by the user. Determining from the variation one or more primitives describing a motion made by the control object and/or the character of the control object may involve computationally analyzing the images of the control object(s) to recognize a gesture primitive including at least a portion of a trajectory (trajectory portion) describing motion made by the control object. The analysis may include identifying a scale associated with the gesture primitive, the scale being indicative of an actual distance traversed by the control object; the scale may be identified, for instance, by comparing the recognized gesture with records in a gesture database, which may include a series of electronically stored records each relating a gesture to an input parameter. The gestures may be stored in the records as vectors. The analysis may further include computationally determining a ratio between the scale and a displayed movement corresponding to an action to be displayed on a presentation device. The action may then be displayed based on the ratio. The ratio may be adjusted based on an external parameter such as, e.g., the actual gesture distance, or the ratio of a pixel distance in the captured images corresponding to performance of the gesture to the size, in pixels, of the display screen. Analyzing the images of the control object(s) may also include identifying a shape and position of the control object(s) in the images, and reconstructing the position and the shape of the control object(s) in 3D space based on correlations between the identified shapes and positions of the control object(s) in the images. The method may also involve defining a 3D model of the control object(s), in which case the position and shape of the control object(s) may be reconstructed in 3D space based on the 3D model. In some embodiments, analyzing the images of the control object(s) further includes temporally combining the reconstructed positions and shapes of the control object(s) in 3D space. In certain embodiments, determining from the variation one or more primitives describing a motion made by the control object and/or the character of the control object comprises determining a position or motion of the control object(s) relative to a virtual control construct.
Comparing the primitive(s) to one or more templates in a library of gesture templates may include disassembling at least a portion of a trajectory into a set of frequency components (e.g., by applying Fourier analysis to the trajectory portion as a signal over time to determine the set of frequency components), and searching for the set of frequency components among the template(s) stored in the library. Alternatively or additionally, comparing the primitive(s) to one or more templates in a library of gesture templates may include disassembling at least a portion of a trajectory into a set of frequency components, fitting a set of one or more functions to a set of frequency components representing at least a portion of a trajectory (e.g., fitting a Gaussian function to the set of frequency components), and searching for the set of functions among the template(s) stored in the library. In yet another alternative implementation, comparing the primitive(s) to one or more templates in a library of gesture templates may include disassembling at least a portion of a trajectory into a set of time dependent frequency components (e.g., by applying wavelet analysis to the trajectory portion as a signal over time), and searching for the set of time dependent frequency components among the template(s) stored in the library. In yet another embodiment, comparing the primitive(s) to one or more templates in a library of gesture templates includes distorting at least a portion of a trajectory based at least in part upon frequency of motion components, and searching for the distorted trajectory among the template(s) stored in the library.
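A hedged sketch of the Fourier-based variant: a trajectory portion is disassembled into frequency components and the template library is searched for the nearest signature. The bin count, normalization, and nearest-neighbor search are illustrative choices, not the patent's prescribed method.

```python
import numpy as np

def frequency_signature(trajectory, n_bins=4):
    """Disassemble a trajectory portion (sampled 3D positions over time)
    into its low-frequency components via an FFT along the time axis."""
    traj = np.asarray(trajectory, dtype=float)             # shape (T, 3)
    spectrum = np.abs(np.fft.rfft(traj - traj.mean(axis=0), axis=0))
    sig = spectrum[1:1 + n_bins].ravel()                   # low-frequency bins
    return sig / (np.linalg.norm(sig) + 1e-12)             # scale-invariant

def best_template(trajectory, library):
    """Search the template library for the closest frequency signature."""
    sig = frequency_signature(trajectory)
    return min(library, key=lambda name: np.linalg.norm(sig - library[name]))

# Two synthetic template gestures: a circle and a straight swipe.
t = np.linspace(0, 2 * np.pi, 64)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
swipe  = np.stack([t / t.max(), np.zeros_like(t), np.zeros_like(t)], axis=1)
library = {"circle": frequency_signature(circle),
           "swipe": frequency_signature(swipe)}
# A smaller, offset circle still matches: the signature is scale-invariant.
print(best_template(0.3 * circle + 0.01, library))   # -> 'circle'
```

A wavelet transform could be substituted for the FFT to obtain the time-dependent frequency components described in the third alternative.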
In some embodiments, selecting from a result of the comparison a set of templates of possible gestures corresponding to the primitive(s) involves determining a similarity between the one or more primitives and the set of templates by applying at least one similarity determiner (such as a correlation, a convolution, and/or a dot product), and providing the similarity as an indication of quality of correspondence between the primitives and the set of templates. Selecting a set of templates may also include performing at least one of scaling and shifting to at least one of the primitives and the set of templates. Further, selecting a set of templates may involve disassembling at least a portion of a trajectory into a set of frequency components, filtering the set of frequency components to remove motions associated with jitter (e.g., by applying a Frenet-Serret filter), and searching for the filtered set of frequency components among the template(s) stored in the library.
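The similarity determiners named above (correlation, convolution, dot product), together with jitter filtering, might look like the following sketch. The dot-product branch assumes primitives and templates have been resampled to equal length, and the FFT low-pass in remove_jitter is a simple stand-in for the Frenet-Serret filter cited in the text.

```python
import numpy as np

def similarity(primitive, template, determiner="dot"):
    """Quality-of-correspondence score between a primitive and a template,
    after shifting (mean removal) and scaling (unit norm)."""
    a = np.asarray(primitive, float).ravel()
    b = np.asarray(template, float).ravel()
    a = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + 1e-12)
    b = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + 1e-12)
    if determiner == "dot":          # normalized dot product, in [-1, 1]
        return float(np.dot(a, b))
    if determiner == "correlation":  # peak of the sliding cross-correlation
        return float(np.max(np.correlate(a, b, mode="full")))
    if determiner == "convolution":  # peak of the convolution
        return float(np.max(np.convolve(a, b)))
    raise ValueError(determiner)

def remove_jitter(signal, keep=6):
    """Crude jitter suppression: zero the high-frequency FFT bins of a
    1-D motion component before template search."""
    spec = np.fft.rfft(np.asarray(signal, float))
    spec[keep:] = 0
    return np.fft.irfft(spec, n=len(signal))

clean = np.sin(np.linspace(0, 2 * np.pi, 64))
shaky = clean + 0.05 * np.random.default_rng(0).standard_normal(64)
print(round(similarity(remove_jitter(shaky), clean), 3))   # -> ~1.0
```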
In various embodiments, the method further includes computationally determining a degree of completion of at least one gesture, and modifying contents of a display in accordance with the determined degree of completion; the contents may include, e.g., an icon, a bar, a color gradient, or a color brightness. Further, the degree of completion may be compared to a threshold value, and a command to be performed upon the degree of completion may be indicated. Further, an action responsive to the gesture may be displayed based on the degree of gesture completion and in accordance with a physics simulation model and/or a motion model (which may be constructed, e.g., based on a simulated physical force, gravity, and/or a friction force).
In various embodiments, the method further includes computationally determining a dominant gesture (e.g., by filtering the plurality of gestures); and presenting an action on a presentation device based on the dominant gesture. For instance, each of the gestures may be computationally represented as a trajectory, and each trajectory may be computationally represented as a vector along six Euler degrees of freedom in Euler space, the vector having the largest magnitude being determined to be the dominant gesture.
In some embodiments, providing at least one of the set of templates of possible gestures as an indication of a command to issue to a machine under control responsive to the variation comprises filtering one or more gestures based at least in part upon one or more characteristics to determine a set of gestures of interest, and providing the set of gestures of interest (e.g., via an API). The characteristics may include the configuration, shape, and/or position of an object making the gesture. Gestures may be associated with primitives in a data structure.
In some embodiments, providing at least one of the set of templates of possible gestures as an indication of a command to issue to a machine under control responsive to the variation further includes detecting a conflict between a template corresponding to a user-defined gesture and a template corresponding to a predetermined gesture; and applying a resolution determiner to resolve the conflict, e.g., by ignoring a predetermined gesture when the conflict is between a predetermined gesture and a user-defined gesture and/or by providing the user-defined gesture when the conflict is between a predetermined gesture and a user-defined gesture.
In another aspect, embodiments relate to a system enabling dynamic user interactions with a device having a display screen. The system includes at least one camera oriented toward a field of view and at least one source to direct illumination onto at least one control object in the field of view. Further, the system includes a gesture database comprising a series of electronically stored records, each of the records relating a gesture to an input parameter, and an image analyzer coupled to the camera and the database. The image analyzer is generally any suitable combination of hardware and/or software for performing the functions of the methods described above (including, e.g., image analysis and gesture recognition). The image analyzer is configured to operate the camera to capture a plurality of temporally sequential images of the control object(s); analyze the images of the control object(s) to recognize a gesture performed by the user; compare the recognized gesture with records in the gesture database to identify an input parameter associated therewith, the input parameter corresponding to an action for display on the display screen in accordance with a ratio between an actual gesture distance traversed in performance of the gesture and a displayed movement corresponding to the action; and adjust the ratio based on an external parameter. The external parameter may be the actual gesture distance, or a ratio of a pixel distance in the captured images corresponding to performance of the gesture to a size, in pixels, of the display screen. The ratio may be local to each gesture and may be stored in each gesture record in the database, or the ratio may be global across all gestures in the gesture database.
The image analyzer may be further configured to (i) identify shapes and positions of the at least one control object in the images and (ii) reconstruct a position and a shape of the at least one control object in 3D space based on correlations between the identified shapes and positions of the at least one control object in the images. Further, the image analyzer may be configured to define a 3D model of the control object(s) and reconstruct the position and shape of the control object(s) in 3D space based on the 3D model, and/or to estimate a trajectory of the at least one control object in 3D space. In some embodiments, the image analyzer is further configured to determine a position or motion of the control object(s) relative to a virtual control construct.
In various embodiments, a system enabling dynamic user interactions with a device includes one or more cameras and sources (e.g., light sources or sonic sources) for direct illumination (broadly understood, e.g., so as to include irradiation with ultrasound) of one or more control objects; a gesture database comprising a series of electronically stored records, each specifying a gesture; and an image analyzer coupled to the camera and the database and configured to operate the camera to capture a plurality of images of the control object(s); analyze the images to recognize a gesture; compare the recognized gesture with records in the gesture database to identify the gesture; determine a degree of completion of the recognized gesture; and display an indicator (such as an icon, a bar, a color gradient, or a color brightness) on a screen of the device reflecting the determined degree of completion. The image analyzer may be further configured to determine whether the degree of completion is above a predetermined threshold value and, if so, to cause the device to take a completion-triggered action. Further, the image analyzer may be configured to display an action responsive to the gesture in accordance with a physics simulation model and based on the degree of gesture completion. The displayed action may be further based on a motion model. The image analyzer may be further configured to determine a position or motion of the control object(s) relative to a virtual control construct.
In various embodiments, a system for controlling dynamic user interactions with a device includes one or more cameras and (e.g., light or sonic) sources for direct illumination (again, broadly understood) of one or more control objects manipulated by the user in the field of view; a gesture database comprising a series of electronically stored records each specifying a gesture; and an image analyzer coupled to the camera and the database and configured to operate the camera to capture a plurality of temporally sequential images of the control object(s), analyze the images of the control object(s) to recognize a plurality of user gestures, determine a dominant gesture, and display an action on the device based on the dominant gesture.
The image analyzer may be further configured to determine the dominant gesture by filtering the plurality of gestures (e.g., iteratively), and/or to represent each of the gestures as a trajectory (e.g., as a vector along six Euler degrees of freedom in Euler space, the vector with the largest magnitude being determined to be the dominant gesture). The image analyzer may be further configured to determine a position or motion of the at least one control object relative to a virtual control construct.
Reference throughout this specification to “one example,” “an example,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example of the present technology. Thus, the occurrences of the phrases “in one example,” “in an example,” “one embodiment,” or “an embodiment” in various places throughout this specification are not necessarily all referring to the same example. Furthermore, the particular features, structures, routines, steps, or characteristics may be combined in any suitable manner in one or more examples of the technology. The headings provided herein are for convenience only and are not intended to limit or interpret the scope or meaning of the claimed technology.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, like reference characters generally refer to like parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles disclosed herein. In the following description, various embodiments are described with reference to the following drawings, in which:
FIG. 1A depicts an exemplary scenario for gesture-based control of an electronic device in accordance with an embodiment;
FIG. 1B is a flow chart illustrating a method for machine control in accordance with an embodiment;
FIG. 2 illustrates the simultaneous execution of multiple gestures in accordance with an embodiment;
FIGS. 3A and 3B depict on-screen indicators reflecting a degree of completion of the user's gesture in accordance with an embodiment;
FIG. 3C is a flow chart illustrating a method of predicting when the virtual object is selected by a user and subsequently timely manipulating the selected object in accordance with an embodiment;
FIGS. 4A and 4B illustrate a dynamic adjustment of a relationship between the user's actual movements and the resulting action displayed on the screen in accordance with an embodiment; and
FIG. 4C is a flow chart illustrating a method of dynamically adjusting the relationship between a user's actual motion and the resulting object movement displayed on the electronic device's screen in accordance with an embodiment.
FIGS. 5A and 5B are perspective views of a planar virtual surface construct and a control object in the disengaged and engaged modes, respectively, illustrating free-space gesture control of a desktop computer in accordance with various embodiments;
FIG. 5C-1 is a perspective view of a tablet connected to a motion-capture device, illustrating free-space gesture control of the tablet in accordance with various embodiments;
FIG. 5C-2 is a perspective view of a tablet incorporating a motion-capture device, illustrating free-space gesture control of the tablet in accordance with various embodiments;
FIG. 5D is a perspective view of a curved virtual surface construct accommodating free-space gesture control of a multi-screen computer system in accordance with various embodiments;
FIG. 6 illustrates motion of a virtual surface construct relative to a user's finger in accordance with various embodiments;
FIGS. 7A and 7B are plots of a virtual energy potential and its derivative, respectively, in accordance with various embodiments for updating the position of a virtual surface construct;
FIGS. 7C-7E are plots of alternative virtual energy potentials in accordance with various embodiments for updating the position of a virtual surface construct;
FIGS. 8A, 8B, and 8B-1 are flow charts illustrating methods for machine and/or user interface control in accordance with various embodiments;
FIG. 9A is a schematic diagram of a system for capturing image data and tracking a control object based thereon in accordance with various embodiments;
FIG. 9B is a block diagram of a computer system for gesture recognition and machine control in accordance with various embodiments;
FIGS. 10A-10D illustrate a free-space compound gesture in accordance with various embodiments;
FIGS. 11A and 11B illustrate, in two snapshots, a zooming action performed by a user via a free-space gesture in accordance with various embodiments;
FIGS. 12A and 12B illustrate, in two snapshots, a swiping action performed by a user via a free-space gesture in accordance with various embodiments; and
FIGS. 13A and 13B illustrate, in two snapshots, a drawing action performed by a user via free-space hand motions in accordance with various embodiments.
DETAILED DESCRIPTION
Systems and methods in accordance herewith generally utilize information about the motion of a control object, such as a user's finger or a stylus, in three-dimensional space to operate a user interface and/or components thereof based on the motion information. A “control object” as used herein with reference to an embodiment is generally any three-dimensionally movable object or appendage with an associated position and/or orientation (e.g., the orientation of its longest axis) suitable for pointing at a certain location and/or in a certain direction. Control objects include, e.g., hands, fingers, feet, or other anatomical parts, as well as inanimate objects such as pens, styluses, handheld controls, portions thereof, and/or combinations thereof. Where a specific type of control object, such as the user's finger, is used hereinafter for ease of illustration, it is to be understood that, unless otherwise indicated or clear from context, any other type of control object may be used as well.
Various embodiments take advantage of motion-capture technology to track the motions of the control object in real time (or near real time, i.e., sufficiently fast that any residual lag between the control object and the system's response is unnoticeable or practically insignificant). Other embodiments may use synthetic motion data (e.g., generated by a computer game) or stored motion data (e.g., previously captured or generated). References to motions in “free space” or “touchless” motions are used herein with reference to an embodiment to distinguish them from motions tied to and/or requiring physical contact of the moving object with a physical surface to effect input; however, in some applications, the control object may contact a physical surface ancillary to providing input, in which case the motion is still considered a “free-space” motion. Further, in some embodiments, the motion is tracked and analyzed relative to a virtual control construct, such as a virtual surface, programmatically defined in space and not necessarily corresponding to a physical surface or object; intersection of the control object with that virtual control construct defines a “virtual touch.” The virtual surface may, in some instances, be defined to co-reside with or be placed near a physical surface (e.g., a virtual touch screen may be created by defining a (substantially planar) virtual surface at or very near the screen of a display (e.g., television, monitor, or the like); or a virtual active table top may be created by defining a (substantially planar) virtual surface at or very near a table top convenient to the machine receiving the input).
FIG. 1A illustrates a gesture-recognition scenario in accordance herewith. A user 100 interacts, via hand motions (or motions of another control object 102), with an electronic device 104 and associated display 106. The user's gestures are captured by suitable motion-capture hardware 108, which may, for instance, include one or more cameras that acquire a stream of images of the hand within a camera field of view. A system 110 for gesture-based machine control, implemented, e.g., on a computer, may analyze the image stream to infer four-dimensional information about the three-dimensional shape, configuration, position, and orientation of the hand 102 (or other control object) and their evolution in time, and compute suitable control signals to the electronic device 104 based thereon. Meaningful control input thus detected generally causes a response action by the device 104 that is, typically, visually represented on the display 106. For example, the user may, via the gestures, manipulate controls or other virtual objects 112, such as prototypes/models, blocks, spheres, or other shapes, buttons, levers, cursors or other controls, in a virtual environment displayed on the device's screen 106, thereby remotely interacting with the user interface of the device 104. Alternatively or additionally, the position and shape of the user's hand may be reconstructed and reproduced on the display screen 106.
In more detail, the system 110 may include an image-analysis module 114 that reconstructs the shapes and positions of the user's hand in 3D space and in real time; suitable systems and methods are described, e.g., in U.S. Ser. Nos. 61/587,554, 13/414,485, and 61/724,091, filed on Jan. 17, 2012, Mar. 7, 2012, and Nov. 8, 2012, respectively, the entire disclosures of which are hereby incorporated by reference. Based on the reconstructed shape, configuration, position, and orientation of the control object as a function of time, object and motion attributes may be derived. For example, the configuration of the user's hand (or other control object) may be characterized by a three-dimensional surface model or simply the position of a few key points (e.g., the finger tips) or other key parameters; and the trajectory of a gesture may be characterized with one or more vectors and/or scaling parameters (e.g., a normalized vector from the start to the end point of the motion, a parameter indicating the overall scale of the motion, and a parameter indicating any rotation of the control object during the motion). Other parameters that can be associated with gesture primitives include an acceleration, a deceleration, a velocity, a rotational velocity, a rotational acceleration, other parameters of motion, parameters of appearance of the control object such as color, apparent surface texture, temperature, other qualities or quantities capable of being sensed, and/or various combinations thereof. In some embodiments, the raw motion data is filtered prior to ascertaining motion attributes, e.g., in order to eliminate unintended jitter.
A gesture-recognition module 116 takes the object and motion attributes, or other information from the image-analysis module, as input to identify gestures. In one embodiment, the gesture-recognition module 116 compares attributes of motion or character detected from imaging or sensing a control object to gestures of a library of gesture templates electronically stored in a database 120 (e.g., a relational database, an object-oriented database, or any other kind of database), which is implemented in the system 110, the electronic device 104, or on an external storage system. (As used herein, the term “electronically stored” includes storage in volatile or non-volatile storage, the latter including disks, Flash memory, etc., and extends to any computationally addressable storage media (including, for example, optical storage).) For example, gesture primitives may be stored as vectors, i.e., mathematically specified spatial trajectories, and the gesture information recorded may include the relevant part of the user's body making the gesture; thus, similar trajectories executed by a user's hand and head may be stored in the database as different gestures, so that an application can interpret them differently. In one embodiment, one or more components of trajectory information about a sensed gesture—and potentially other gesture primitives—are mathematically compared against the stored trajectories to find potential matches from which a best match (or best matches) may be selected, and the gesture is recognized as corresponding to the located database entry based upon qualitative, statistical confidence factors or other quantitative criteria indicating a degree of match. For example, a confidence factor that exceeds a threshold can indicate a potential match.
Accordingly, as illustrated in FIG. 1B, a method of controlling a machine may involve sensing a variation of position of one or more control objects, e.g., by processing images acquired by motion-capture hardware 108 with an image-analysis module 114 (150). From the sensed variation, one or more primitives describing a motion and/or the character of the control object(s) may be determined (152), and the primitives may then be compared against one or more templates of a library (e.g., stored in a database 120) of gesture templates (154). From the result of the comparison, a set of templates of possible gestures corresponding to one or more primitives may be selected (156), and the selected set of templates may be provided as an indication of a command to be issued to a machine under control (such as, e.g., device 104) (158).
One technique for comparison (154) comprises dynamic time warping, in which observed trajectory information is temporally distorted and the distortions are compared against stored gesture information (in a database, for example). One type of distortion comprises frequency distortion, in which the trajectory information is distorted across frequencies of motion to yield a set of distorted trajectories. The set of distorted trajectories can be searched for matches in the database. Such frequency distortions enable finding gestures made at different frequencies of motion than the template or templates stored in the database.
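For reference, the following is a textbook dynamic-time-warping distance (a standard formulation, not code from the patent) that lets gestures performed at different speeds match the same template:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sampled 3D
    trajectories: small when the shapes match, regardless of speed."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # pointwise distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

# A slow and a fast rendition of the same push gesture still match closely.
slow = [[0, 0, z] for z in np.linspace(0.4, 0.2, 20)]
fast = [[0, 0, z] for z in np.linspace(0.4, 0.2, 6)]
print(dtw_distance(slow, fast))   # small compared to an unrelated swipe
```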
Another technique employs Fourier analysis to disassemble a portion of a trajectory (viewed as a signal over time) into frequency components. The set of frequencies can be searched for among the template(s) stored in the database.
A further technique employs wavelet analysis to disassemble a portion of a trajectory (viewed as a signal over time) into time dependent frequency components. The set of frequencies can be searched for among the template(s) stored in the database.
In a yet further embodiment, Gaussian (or other) functions can be fit to the set of frequencies representing the trajectory portion to form a set of Gaussian functions at the frequencies of the trajectory. The functions can be cepstral envelopes in some embodiments. The functions fit to the frequencies can be searched for among the template(s) stored in the database.
In still further embodiments, techniques for finding similarity between two or more signal portions can facilitate locating template(s) in the database corresponding to the trajectory. For example, without limitation, correlation, convolution, sliding dot product, fixed dot product, or combinations thereof can be determined from the trajectory information and one or more template(s) in the database to determine a quality of match.
Of course, frequency components may be scaled and/or shifted to facilitate finding appropriate templates in the database corresponding to the gesture(s) to be recognized. Further, in some embodiments, frequency filtering can be applied to frequency components to facilitate finding template(s) stored in the database. For example, filtering can be used to eliminate jitter from shaking hands by eliminating high frequency components from the trajectory spectrum. In an embodiment, trajectories can be smoothed by applying Frenet-Serret filtering techniques described in U.S. Provisional Application No. 61/856,976, filed on Jul. 22, 2013 and entitled “Filtering Motion Using Frenet-Serret Frames,” the entire disclosure of which is hereby incorporated herein by reference.
In brief, as is known in the art, the Frenet-Serret formulas describe the kinematic properties of a particle moving along a continuous, differentiable curve in 3D space. This representation of motion is better tailored to gestural movements than the conventional Cartesian (x, y, z) representation. Accordingly, embodiments convert captured motion from Cartesian space to Frenet-Serret space by attaching Frenet-Serret reference frames to a plurality of locations on the control object's path. The Frenet-Serret frame consists of (i) a tangent unit vector (T) that is tangent to the path, (ii) a normal unit vector (N) that is the derivative of T with respect to an arclength parameter of the path, divided by its length, and (iii) a binormal unit vector (B) that is the cross-product of T and N. Alternatively, the tangent vector may be determined by normalizing a velocity vector if it is known at a given location on the path. These unit vectors T, N, B collectively form the orthonormal basis of the Frenet-Serret frame in 3D space. The Frenet-Serret coordinate system is constantly rotating as the object traverses the path, and so may provide a more natural coordinate system for an object's trajectory than a strictly Cartesian system.
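A small numerical sketch of attaching Frenet-Serret frames to a sampled path follows, using finite differences in place of exact derivatives; the function names are illustrative.

```python
import numpy as np

def frenet_frames(path):
    """Attach a (T, N, B) Frenet-Serret frame to each sample of a 3D path:
    T is the unit tangent, N the normalized derivative of T, B = T x N."""
    p = np.asarray(path, float)
    d = np.gradient(p, axis=0)                        # velocity estimate
    T = d / np.linalg.norm(d, axis=1, keepdims=True)  # unit tangents
    dT = np.gradient(T, axis=0)
    N = dT / (np.linalg.norm(dT, axis=1, keepdims=True) + 1e-12)
    B = np.cross(T, N)                                # binormal
    return T, N, B

# A helix makes the rotation of the frame along the path easy to see.
t = np.linspace(0, 2 * np.pi, 50)
helix = np.stack([np.cos(t), np.sin(t), 0.2 * t], axis=1)
T, N, B = frenet_frames(helix)
print(round(float(np.dot(T[10], N[10])), 4))   # ~0: frame is orthonormal
```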
Once converted to Frenet-Serret space, the object's motion is filtered. The filtered data may then be converted back to Cartesian space or another desired reference frame. In one embodiment, filtering includes applying a smoothing filter to a set of sequential unit vectors corresponding to the tangent, normal, and/or binormal direction of the Frenet-Serret frame. For some filters, each unit vector is specified by one scalar value per dimension (i.e., by three scalar values in 3D) and filtered separately. The smoothing filter may be applied to each set of scalar values, the direction of the vector may thereafter be reconstructed from its filtered values, and the other two vectors of the frame at each point may be recalculated accordingly. A 3D curve interpolation method may then be applied to generate a 3D curve that passes through the points in the given order, matching the filtered Frenet-Serret frame at each point and representing the object's path of motion.
In various alternative embodiments, noise filtering may be achieved by determining the rotation between consecutive Frenet-Serret frames along the path using the Frenet-Serret formulas describing curvature and torsion. The total rotation of the Frenet-Serret frame is the combination of the rotations of each of the three Frenet vectors described by the formulas
$$\frac{dT}{ds} = \kappa N, \qquad \frac{dN}{ds} = -\kappa T + \tau B, \qquad \frac{dB}{ds} = -\tau N,$$

where $\frac{d}{ds}$ denotes the derivative with respect to arclength, κ is the curvature, and τ is the torsion of the curve. The two scalars κ and τ may define the curvature and torsion of a 3D curve, in that the curvature measures how sharply a curve is turning while torsion measures the extent of its twist in 3D space. Alternatively, the curvature and torsion parameters may be calculated directly from the derivative of best-fit curve functions (i.e., velocity) using, for example, the equations
$$\kappa = \frac{\lvert v \times a \rvert}{\lvert v \rvert^{3}} \qquad\text{and}\qquad \tau = \frac{(v \times a) \cdot \dot{a}}{\lvert v \times a \rvert^{2}}.$$
The curvature and torsion parameters describing the twists and turns of the Frenet-Serret frames in 3D space may be filtered, and a smooth path depicting the object's motion may be constructed therefrom.
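A sketch of computing curvature and torsion from sampled velocity, acceleration, and jerk using the formulas above; the unit-circle check at the end confirms κ = 1 and τ = 0, and in practice the resulting κ and τ series would be smoothed before reconstructing the path.

```python
import numpy as np

def curvature_torsion(v, a, a_dot):
    """Curvature and torsion from velocity, acceleration, and jerk samples,
    per the formulas above; inputs are arrays of shape (T, 3)."""
    cross = np.cross(v, a)
    cmag = np.linalg.norm(cross, axis=1)
    kappa = cmag / np.linalg.norm(v, axis=1) ** 3
    tau = np.einsum("ij,ij->i", cross, a_dot) / (cmag ** 2 + 1e-12)
    return kappa, tau

# Unit circle in the xy-plane: curvature 1, torsion 0 everywhere.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
v = np.stack([-np.sin(t), np.cos(t), np.zeros_like(t)], axis=1)
a = np.stack([-np.cos(t), -np.sin(t), np.zeros_like(t)], axis=1)
j = np.stack([np.sin(t), -np.cos(t), np.zeros_like(t)], axis=1)
kappa, tau = curvature_torsion(v, a, j)
print(round(float(kappa[0]), 4), round(float(tau[0]), 4))   # -> 1.0 0.0
```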
In some embodiments, additional filtering, modification or smoothing may be applied to the resulting path, e.g., utilizing the principles of an Euler spiral (or similar construct), to create aesthetically pleasing curves and transitions before converting the coordinates back to Cartesian coordinates. In one embodiment, the filtered Frenet-Serret path (with or without modification by, for example, application of the Euler spiral) may be used to better predict future motion of the object. By removing or reducing any noise, inconsistencies, or unintended motion in the path, the filtered path may better predict a user's intent in executing a gestural motion. The predicted future motion along the Frenet-Serret path is therefore based on past-detected motion and a kinematic estimate of the user's intent behind the motion.
Returning to the discussion of gestures stored in the database, gesture templates can comprise one or more frequencies, combinations of frequency and motion information and/or characteristics of control objects (e.g., apparent texture, color, size, combinations thereof). Templates can be created to embody one or more components from taught gestures using techniques described in U.S. Provisional Application No. 61/872,538, filed on Nov. 20, 2013 and entitled “Interactive Training Recognition of Free Space Gestures for Interface and Control,” the entire disclosure of which is hereby incorporated herein by reference. In brief, a (typically computer-implemented) gesture training system may help application developers and/or end-users to define their own gestures and/or customize gestures to their needs and preferences—in other words, to go outside the realm of pre-programmed, or “canned,” gestures. The gesture training system may interact with the user through normal language, e.g., a series of questions, to better define the action the user wants the system to be able to recognize. By answering these questions in a pre-described setup process, the user defines parameters and/or parameter ranges for the respective gesture, thereby resolving ambiguities. Advantageously, this approach affords reliable gesture recognition without the algorithmic complexity normally associated with the need for the computer to guess the answers; thus, it helps reduce software complexity and cost. In one embodiment, once the system has been trained to recognize a particular gesture or action, it may create an object (e.g., a file, data structure, etc.) for this gesture or action, facilitating recognition of the gesture or action thereafter. The object may be used by an application programming interface (API), and may be employed by both developers and non-developer users. In some embodiments, the data is shared or shareable between developers and non-developer users, facilitating collaboration and the like.
In some embodiments, gesture training is conversational, interactive, and dynamic; based on the responses the user gives, the next question, or the next parameter to be specified, may be selected. The questions may be presented to the user in visual or audio format, e.g., as text displayed on the computer screen or via speaker output. User responses may likewise be given in various modes, e.g., via text input through a keyboard, selection of graphic user-interface elements (e.g., using a mouse), voice commands, or, in some instances, via basic gestures that the system is already able to recognize. (For example, a “thumbs-up” or “thumbs-down” gesture may be used to answer any yes-no question.) Furthermore, as illustrated by way of example below, certain questions elicit an action—specifically, performance of an exemplary gesture (e.g., a typical gesture or the extremes of a range of gestures)—rather than a verbal response. In this case, the system may utilize, e.g., machine learning approaches, as are well known to persons of skill in the art, to distill the relevant information from the camera images or video stream capturing the action.
In one embodiment, vector(s) or other mathematical constructs representing portions of gesture(s) may be scaled so that, for example, large and small arcs traced by a user's hand will be recognized as the same gesture (i.e., corresponding to the same database record) but the gesture recognition module will return both the identity and a value, reflecting the scaling, for the gesture. The scale may correspond to an actual gesture distance traversed in performance of the gesture, or may be normalized to some canonical distance. Comparison of a tracked motion against a gesture template stored in the library facilitates determining a degree of completion of the gesture (discussed in more detail below), and can enable some embodiments to provide increased accuracy with which detected motions are interpreted as control input.
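One simple way to factor a gesture into a scale-invariant shape plus a scale value, so that large and small arcs match the same template while the traversed distance can still be reported, is sketched below; the centering-and-arclength normalization scheme is an illustrative assumption, not the patent's prescribed representation.

```python
import numpy as np

def normalize_scale(trajectory):
    """Return a scale-invariant trajectory plus the scale factor, so a
    recognizer can report both the gesture identity and a scaling value."""
    traj = np.asarray(trajectory, float)
    centered = traj - traj.mean(axis=0)                  # remove translation
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    scale = float(seg.sum())                             # actual gesture distance
    return centered / (scale + 1e-12), scale

big, s_big = normalize_scale([[0, 0, 0], [0.4, 0, 0], [0.8, 0, 0]])
small, s_small = normalize_scale([[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0]])
print(np.allclose(big, small), s_big, s_small)   # -> True 0.8 0.2
```

The shared normalized shape would match one template, while the returned scales (0.8 m versus 0.2 m here) supply the per-gesture value discussed above.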
In various embodiments, if the gesture-recognition module 116 is implemented as part of a specific application (such as a game or controller logic for a television), the stored gesture information may also contain an input parameter corresponding to the gesture (which may be scaled using the scaling value); in some systems where the gesture-recognition module 116 is implemented as a utility available to multiple applications, this application-specific parameter is omitted: when an application invokes the gesture-recognition module 116, it interprets the identified gesture in accordance with its own programming.
In some embodiments, the gesture-recognition module 116 detects more than one gesture. Referring to FIG. 2, for example, the user may perform an arm-waving gesture with fingers flexing. The gesture-recognition module 116 detects the waving and flexing gestures and records a waving trajectory 200 and five flexing trajectories 202, 204, 206, 208, 210 for the five fingers. Each trajectory may be converted into a vector along, for example, six Euler degrees of freedom (x, y, z, roll, pitch, and yaw) in Euler space (or another mathematical formalism describing translation and rotation in space as a time series of rotations and translations of one or more points on the object; see, e.g., Wikipedia, “Euler Angles” (http://en.wikipedia.org/wiki/Euler_angles)). The vector with the largest magnitude represents the dominant component of the motion (e.g., waving in this case) and the rest of the vectors (e.g., corresponding to finger flexing) may be ignored. In one embodiment, a vector filter that can be implemented using any of a variety of filtering techniques is applied to the multiple vectors to filter out less relevant vectors, thereby enabling the dominant vector to be identified. This process may be repetitive, iterating until one vector—the dominant component of the motion—is identified. In some embodiments, a new filter is generated or initiated every time new gestures are detected. As an alternative to using simply the most prominent motion corresponding to the largest vector, gestures may be filtered based on context and/or predetermined classifications. For example, in application contexts where user input is based on subtle finger motions and configurations of the hand, such as virtual typing or manipulation of complex virtual controls, larger motions of the hand as a whole may be ignored. Thus, the user may, for instance, pace around the monitored region while gesturing, and the overall translational movement will have no effect on the input provided to the electronic device.
With renewed reference to FIG. 1A, the gestures identified by the gesture-recognition module 116 may be provided as input to a device and user-interface control module 118, which maps them to control signals. The control module 118 may be specific and/or customized to the electronic device 104 or the application executed thereon, or may provide standard signals via an application programming interface that are thereafter further interpreted by the electronic device 104. For example, the control module 118 may map gestures onto the control inputs available with a computer mouse (e.g., left-click, right-click, double-click, translation) or keyboard (i.e., the different keys), thus allowing mouse and/or keyboard operation to be emulated by free-space gestures. Of course, free-space gesture recognition in accordance herewith is not limited to traditional user-input actions, but facilitates defining entirely new and distinct actions (e.g., a "trigger-pulling" gesture, kicks and other gestures performed by body parts other than the hand, etc.) with associated special meanings and interpretations. Further, a gesture need not correspond to a particular discrete input, but may provide one or more input parameters along a continuum (e.g., an angle by which a virtual dial is to be rotated or a distance by which a cursor is to be moved). The control module 118 may also translate the gesture into a graphic representation thereof (e.g., a video stream showing the motions of a rendition of the control object 102) for display on the screen 106.
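A hedged illustration of how such a control module might map recognized gestures onto emulated mouse events; the gesture names and the mapping itself are invented here, not taken from the described system.

```python
# Sketch of gesture-to-control-signal mapping; names are hypothetical.
MOUSE_EMULATION = {
    "pinch": "left-click",
    "double-pinch": "double-click",
    "two-finger-tap": "right-click",
    "lateral-swipe": "translation",
}

def to_control_signal(gesture_name, scale=None):
    """Translate a recognized gesture into an emulated input event.

    'scale' carries a continuous parameter (e.g., a drag distance) for
    gestures that provide input along a continuum.
    """
    event = MOUSE_EMULATION.get(gesture_name)
    if event is None:
        return None                    # not an emulated mouse action
    return (event, scale) if scale is not None else (event,)
```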
Gesture recognition and/or interpretation as control input may be context-dependent, i.e., the same motion may correspond to different control inputs, even for the same electronic device 104 under control, depending, e.g., on the application, application environment, window, or menu that is currently active; user settings and preferences; the presence or absence and the configuration or state of motion of one or more additional control objects; the motion relative to one or more virtual constructs (as discussed in detail below); and/or the recent history of control input. For example, a particular gesture performed with one hand may affect the interpretation of a gesture performed simultaneously with another hand; a finger swipe parallel to the screen may have different meanings in different operational modes as distinguished based on whether the finger pierces a virtual control surface; and a clicking gesture that normally causes selection of a virtual control may have a different effect if made during the course of a video game.
Of course, the functionality of the image-analysis module 114, gesture-recognition module 116, and device and user-interface control module 118 may be organized, grouped, and distributed among various devices and between the electronic device 104 and the gesture-based machine-control system 110 in many different ways, and the depiction of FIG. 1A is not to be understood as limiting. For example, the gesture-recognition module 116 may send signals indicative of the identified gesture (and, if applicable, a scaling parameter or other parameters associated with the gesture) directly to the electronic device 104, which may implement the user-interface control functionality. That is, the device 104 may treat the identified gesture and the scaling value as control input and assign an input parameter value (or values for multiple parameters) thereto; the input parameter(s) may then be used by applications executing on the electronic device 104, facilitating gesture-based user interactions therewith. In various embodiments, the system 100 and the device 104 are integrated in the same machine. For example, the device 104 may be a general-purpose computer, and the modules 114, 116, 118 may be implemented thereon as one or more software programs. Alternatively, part of the system's functionality may be integrated with the motion-capture hardware. A stand-alone device may, for instance, include both the cameras for capturing images and the computational facility for detecting, reconstructing, and tracking control objects based thereon, and raw data indicative of the detected motions may then be further processed and interpreted by a gesture-recognition module executing on a separate machine.
To further illustrate gesture-based machine control in accordance herewith, consider the following exemplary user interaction with an electronic device 104: To initiate communication with the electronic device 104, the user may first move a hand in a repetitive or distinctive way (e.g., performing a waving hand gesture). Upon detecting and recognizing this hand gesture, the gesture-recognition module 116 transmits a signal indicative thereof to the electronic device 104, which, in response, renders an appropriate display (e.g., a control panel 126). The user then performs another gesture (e.g., moving her hand in an "up" or "down" direction). The gesture-recognition module 116 detects and identifies the gesture and a scale associated therewith, and transmits this data to the electronic device 104; the device 104, in turn, interprets this information as an input parameter (as if the user had pressed a button on a remote control device) indicative of a desired action, enabling the user to manipulate the data displayed on the control panel 126 (such as selecting a channel of interest, adjusting the audio volume, or varying the brightness of the screen). In various embodiments, the device 104 connects to a source of video games (e.g., a video game console or a CD or web-based video game); the user can perform various gestures to remotely interact with the virtual objects 112 in the virtual environment (video game). The detected gestures and scales are provided as input parameters to the currently running game, which interprets them and takes context-appropriate action, i.e., generates screen displays responsive to the gestures.
In various embodiments, after the user successfully initiates communications with the electronic device 104 via the gesture-based machine-control system 110, the system 110 generates a form of feedback (e.g., visual, aural, haptic or other sensory feedback, or combinations thereof) for presentation on appropriate presentation mechanism(s). In the example embodiment illustrated by FIG. 1A, the feedback comprises a cursor 122 (e.g., an arrow, circle, cross hair, or other symbol) or a graphic representation 124 (hereinafter also encompassed by the term "cursor") of the detected body part (e.g., a hand) or other control object, which the system 110 displays on the device's screen 106. In one embodiment, the system 110 coherently locks the movement of the cursor 122 on the screen 106 to follow the actual motion of the user's gesture. For example, when the user moves a hand 102 in the upward direction, the displayed cursor 122 also moves upward on the display screen 106 in response. As a result, the motion of the cursor 122 directly maps user gestures to displayed content such that, for example, the user's hand 102 and the cursor 122 behave like a PC mouse and a cursor on the monitor, respectively. This allows the user to evaluate the relationship between actual physical gesture movement and the resulting actions taking place on the screen 106, e.g., the movement of virtual objects 112 displayed thereon. In mapping movements of the control object to cursor motions, the absolute position of the control object is not always important; rather, relative position and/or directions of movement may control the on-screen action (e.g., the movement of the cursor 122). Such directions, however, are typically (although not necessarily) measured relative to the orientation of the screen 106 (e.g., such that movement to the right when facing the screen results in on-screen cursor movement to the right). Further, in some embodiments, the user can control the position of a cursor and/or other object on the screen by pointing directly at the desired screen location, e.g., with an index finger.
Thus, mapping movements of the control object to those of the cursor on-screen can be accomplished in different ways. In some embodiments, the position and orientation of the control object—e.g., a stretched-out index finger—relative to the screen are used to compute the intersection of a straight line through the axis of the finger with the screen, and a cursor symbol is displayed at the point of intersection. If the range of motion causes the intersection point to move outside the boundaries of the screen, the intersection with a (virtual) plane through the screen may be used, and the cursor motions may be re-scaled or translated, relative to the finger motions, to remain within the screen boundaries. Alternatively to extrapolating the finger towards the screen, the position of the finger (or control object) tip may be projected perpendicularly onto the screen; in this embodiment, the control object orientation may be disregarded. As will be readily apparent to one of skill in the art, many other ways of mapping the control object position and/or orientation onto a screen location may, in principle, be used; a particular mapping may be selected based on considerations such as, without limitation, the requisite amount of information about the control object, the intuitiveness of the mapping to the user, and the complexity of the computation. For example, in some embodiments, the mapping is based on intersections with or projections onto a (virtual) plane defined relative to the camera or other image-capture hardware, under the assumption that the screen is located within that plane (which is correct, at least approximately, if the camera is correctly aligned relative to the screen), whereas, in other embodiments, the screen location relative to the camera is established via explicit calibration (e.g., based on camera images including the screen).
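The ray-based mapping described above reduces to a standard ray/plane intersection. The following is a minimal sketch under the simplifying assumption that the screen lies in the plane z = 0 of the tracking coordinate system; the function name and coordinate convention are hypothetical.

```python
# Intersect a ray from the fingertip with an assumed screen plane z = 0.
import numpy as np

def pointing_intersection(tip, direction):
    """tip: fingertip position (x, y, z); direction: unit vector along the finger.

    Returns the (x, y) coordinates where the finger ray meets the screen
    plane, or None if the finger is parallel to or points away from it.
    """
    tip, direction = np.asarray(tip, float), np.asarray(direction, float)
    if abs(direction[2]) < 1e-9:
        return None                    # ray parallel to the screen plane
    t = -tip[2] / direction[2]         # ray parameter where z = 0
    if t < 0:
        return None                    # finger points away from the screen
    hit = tip + t * direction
    return hit[0], hit[1]
```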
In various embodiments, certain gestures have an associated threshold of completion that needs to be exceeded before the gesture is recognized as such; this completion requirement may serve to enhance the reliability of gesture recognition, in particular the elimination of false positives in gesture detection. As an example, consider the selection by the user of an on-screen virtual object, using a "finger click" in free space. With reference to FIG. 3A, the user may first move the displayed cursor 310, via suitable hand motions or other gestures, to a screen position where it at least partially overlaps with a displayed virtual object 312 of interest. Thereafter, the user may perform another gesture, e.g., "finger clicking," to select the desired object 312. To label the object 312 as a user-selected object, the finger motion may be required to satisfy a predetermined threshold (e.g., 95%) of completion of the gesture; this value may be stored in the database 120 or implemented by the application currently running on the electronic device 316. For example, completion of a "clicking" gesture may require the user's finger to move a distance of five centimeters; upon detecting a finger movement of one centimeter, the gesture-recognition system 314 (which may include, e.g., suitable motion-capture hardware 108 for acquiring images and an associated computational system 110 for processing the images) recognizes the gesture by matching it to a database record, and determines a degree (in this case, 20%) of completion of the recognized gesture. In one embodiment, each gesture in the database includes multiple images or vectors, each of which is associated with a degree (e.g., from 1% to 100%) of completion of the performed gesture; in other embodiments, the degree of completion is computed by interpolation or simple comparison of the observed vector to the stored vector.
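A minimal sketch of the completion test just described; the 5-cm travel requirement and 95% threshold are taken from the example in the text, and the function name is hypothetical.

```python
# Degree-of-completion check for a "finger click" gesture.
def gesture_completion(traveled_cm, required_cm=5.0, threshold=0.95):
    """Return (degree_of_completion, is_complete).

    E.g., 1 cm of travel against a 5 cm requirement yields (0.2, False).
    """
    degree = min(traveled_cm / required_cm, 1.0)
    return degree, degree >= threshold
```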
The degree of completion of the performed gesture (e.g., how much the user has moved her finger or hand) may be rendered on the screen, and indeed, the assessment of gestural completion may be handled by the rendering application running on the device 316 rather than by the gesture-recognition system 314. For example, the electronic device 316 may display a hollow circular icon 318 that the rendering application gradually fills in with a color or multiple colors as the device receives simple motion (position-change) signals from the gesture-recognition system 314 as the user moves a finger closer to the device 316 while performing a clicking or "touching" gesture. The degree to which the circle is filled indicates how close the user's motion is to completing the gesture (or how far the user's finger has moved away from its original location). When the user fully performs the clicking or touching gesture, the circle is entirely filled in; this may result in, for example, labeling the virtual object 312 as a chosen object.
In some embodiments, the device temporarily displays a second indication (e.g., changing the shape, color or brightness of the indicator) to confirm the object selection. The indication of the degree of gesture completion and/or the confirming indication of object selection thus enable the user to easily predict the exact moment when the virtual object is selected; accordingly, the user can subsequently manipulate the selected object on-screen in an intuitive fashion. Although the discussion herein focuses on filling of the hollow circle 318, embodiments can include virtually any type of representation displayed on the screen that can indicate the completion of the performed gesture. For example, a hollow bar 320 progressively filled in by color, a gradient of color 322, the brightness of a color, or any suitable indicator may be used to illustrate a degree of gesture completion performed by the user.
The gesture-recognition system 314 detects and identifies the user's gestures based on the shapes and positions of the gesturing part of the user's body in the captured 2D images. A 3D image of the gesture can be reconstructed by analyzing the temporal correlations of the identified shapes and positions of the user's gesturing body part in consecutively acquired images. Because the reconstructed 3D image can accurately detect and recognize all types of gestures (e.g., moving a finger a distance of less than one centimeter to greater than a meter) in real time, embodiments of the gesture-recognition system 314 provide high detection sensitivity as well as selectivity. In various embodiments, once the gesture is recognized and the instruction associated therewith is identified, the gesture-recognition system 314 transmits signals to the device 316 to activate an on-screen indicator displaying a degree of completion of the user's gesture. The on-screen indicator provides feedback that allows the user to control the electronic device 316 and/or manipulate the displayed virtual objects 312 using various degrees of movement. For example, the user gesture may be as large as a body-length jump or as small as a finger click.
In one embodiment, once the object 312 is labeled as a chosen object, the gesture-recognition system 314 locks the object 312 together with the cursor 310 on the screen to reflect the user's subsequently performed movement. For example, when the user moves a hand in the downward direction, the displayed cursor 310 and the selected virtual object 312 also move downward together on the display screen in response. Again, this allows the user to accurately manipulate the virtual objects 312 in the virtual environment.
In another embodiment, when a virtual object is labeled as a chosen item, the user's subsequent movement is converted computationally to a simulated physical force applied to the selected object. Referring to FIG. 3B, the user may, for example, first move one finger forward for a distance of one centimeter to complete the selection of the virtual object 330; this selection can be confirmed by the hollow circle 332 displayed on the screen being entirely filled in. The user may then move the finger forward for another centimeter. Upon detecting such movement, the gesture-recognition system 314 may convert the motion to a simulated force; the force may be computed based on a conventional physics simulation model, the degree of body movement, the mass and moving velocity of the body part, gravity, and/or any other relevant parameters. The application running on the device 316, which generates the virtual object 330, responds to the force data by rendering the simulated behavior of the virtual object 330 under the influence of the force, e.g., as computed based on a motion model which incorporates Newtonian physical principles. For example, if the user's movement is relatively small within a predetermined range (e.g., less than one centimeter) and/or relatively slow, the converted force deforms the shape of the selected object 330; if, however, the user's movement exceeds the predetermined range (e.g., more than 10 centimeters) or a threshold velocity, the device 316 treats the converted force as large enough (i.e., larger than the simulated static friction force) to move the selected object 330. The motion of the object 330 in response to such push forces is simulated by the rendering application of the device 316 based on the motion model; the position of the object on the screen is then updated to reflect such motion. The rendering application may take other actions with respect to the virtual object 330, e.g., stretching, bending, or operating mechanical controls over buttons, levers, hinges, handles, etc. As a result, the simulated force replicates the effect of equivalent forces in the real world and makes the interaction predictable and realistic for the user.
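A hedged sketch of the movement-to-force conversion and the deform/move decision described above. The simple F = m·a estimate, the mass, and the thresholds are illustrative assumptions only, not the described system's model.

```python
# Convert a tracked movement into a simulated push force and decide whether
# the object deforms or moves; all constants here are hypothetical.
def apply_gesture_force(displacement_m, velocity_mps, dt_s,
                        hand_mass_kg=0.4, static_friction_n=2.0):
    accel = velocity_mps / dt_s if dt_s > 0 else 0.0
    force = hand_mass_kg * accel          # crude Newtonian estimate
    # Small, slow motion deforms the object; larger or faster motion
    # overcomes the simulated static friction and moves it.
    if displacement_m < 0.01 and force <= static_friction_n:
        return ("deform", force)
    return ("move", force)
```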
It should be stressed that the foregoing functional division between the gesture-recognition system 314 and the rendering application running on the device 316 is exemplary only; in some embodiments the two entities are more tightly coupled or even unified so that, rather than simply passing generic force data to the application, the gesture-recognition system 314 has world knowledge of the environment as rendered on the device 316. In this way, the gesture-recognition system 314 can apply object-specific knowledge (e.g., friction forces and inertia) to the force data so that the physical effects of user movements on the rendered objects are computed directly (rather than based on generic force data generated by the gesture-recognition system 314 and processed on an object-by-object basis by the device 316). Moreover, in various embodiments, the motion-capture and gesture-recognition functionality is implemented on the device 316, e.g., as a separate application that provides gesture information to the rendering application (such as a game) running on the device 316, or, as discussed above, as a module integrated within the rendering application (e.g., a game application may be provided with suitable motion-capture and gesture-recognition functionality). The division of computational responsibility between different hardware devices as well as between hardware and software represents a design choice.
A representative method 350 for supporting a user's interaction with an electronic device by means of free-space gestures, and particularly for monitoring the degree of gesture completion so that on-screen action can be deferred until the gesture is finished, is shown in FIG. 3C. The user first initiates communications with an electronic device by performing a gesture (352). This gesture is detected by a motion-capture device and an associated gesture-based machine-control system (354). The gesture-recognition module of the system compares the recognized gesture with gesture records stored in a database, both to identify the gesture and to assess, in real time, a degree of completion (356). The system then transmits signals to the electronic device (358). (As noted earlier, the degree-of-completion functionality may be implemented on the device rather than by the gesture-recognition module, with the latter system merely providing movement-tracking data.) Based on the signals, the electronic device displays an on-screen indicator reflecting the degree of completion of the user's gesture (360). If the degree of completion is above a threshold value (e.g., 95%), the electronic device and/or the virtual objects displayed on the screen are then timely manipulated by the user based on the current gesture and/or subsequently performed gestures (362, 364).
Referring to FIG. 4A, in one embodiment, the displayed motion 410 of the object 412 on the screen 414 is determined based on the absolute spatial displacement associated with the user's actual movement. For example, the user may first slide his hand 416 to the right by one centimeter (as indicated by the arrow 418). Upon detecting and recognizing this hand gesture, the gesture-recognition module transmits a signal to the electronic device 422 indicative of the movement; the device 422 interprets this signal as an input parameter and, in response, takes action to move (i.e., to render as moving) the cursor or virtual object 412 in the same direction by, for example, one hundred pixels on the screen 414. The relationship between the user's physical movement and the rendered movement can be set by the user by, for example, altering the scaling factor stored by the gesture-recognition module (e.g., in the database) for the associated gesture. If the gesture-recognition module is integrated with a rendering application, the user can make this change with gestures. For example, the user may specify a larger on-screen movement (i.e., a movement traversing a larger number of pixels) of the cursor or object 412 in response to a given hand movement. To do so, the user may first activate a ratio control panel 424 displayed on the screen by performing a distinct gesture. The control panel 424 may be rendered, for example, as a slide bar, a circular scale, or in any other suitable form. The user subsequently performs another gesture, suited to the type of the scale control panel 424, to adjust the ratio. For example, if the scale control panel is a slide bar, the user slides her finger to vary the ratio. In another embodiment, no scale control panel is displayed on the screen; the ratio is, instead, adjusted based on the user's subsequent gestures. For example, the user may increase the scale ratio by opening her fist or moving her thumb and index finger apart, and reduce the scale ratio by closing her fist or moving her index finger towards the thumb. Although the discussion herein focuses on hand or finger gestures for purposes of illustration, embodiments can process virtually any gesture performed by any particular part of the human body. Any suitable gesture for communications between the user and the electronic device may be used.
In still other embodiments, the ratio adjustment is achieved using a conventional remote-control device, which the user controls by pushing buttons, or using a wireless device such as a tablet or smart phone. A different scaling ratio may be associated with each gesture and stored in association therewith, e.g., as part of the specific gesture record in the database (i.e., the scaling ratio may be local and potentially differ between gestures). Alternatively, the scaling ratio may be applicable to several or all gestures stored in the gesture database (i.e., the scaling ratio may be global and shared among several or all of the gestures).
Alternatively, the relationship between physical and on-screen movements may be determined, at least in part, based on the characteristics of the display and/or the rendered environment. For example, with reference to FIG. 4B, the acquired (camera) image 430 may be stored as a matrix of M×N pixels, each specifying the detected light intensity or brightness, and the (rendered) frame of the display screen of the electronic device 422 may have X×Y pixels. When the user makes a hand-waving gesture 420 that results in a horizontal displacement by m pixels and a vertical displacement by n pixels in the camera images, the relative horizontal and vertical displacements are set as m/M and n/N, respectively, for scaling purposes. In response to this hand gesture, the cursor or object 412 on the display screen 414 may be moved by x pixels horizontally and by y pixels vertically, where, in the simplest case, x and y are determined as x = (m/M)·X and y = (n/N)·Y, respectively. But even to display essentially unitary (1:1) scaling adjusted for the relative sizes of the user's environment and the display screen, account is generally taken of the camera position and distance from the user, focal length, resolution of the image sensor, viewing angle, etc., and as a result the quantities x and y are multiplied by a constant that results in an essentially affine mapping from "user space" to the rendered image. Once again, the constant may be adjusted to amplify or decrease on-screen movement responsiveness. Such rendition of user interactions with the virtual object 412 on the display screen may provide the user with a realistic feeling while she moves the object in the virtual environment.
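A worked version of the mapping just given, with the adjustable constant exposed as a gain parameter; the function name is hypothetical.

```python
# Map a camera-image displacement (m, n) in an M x N image to a screen
# displacement (x, y) on an X x Y display, per x = (m/M)*X, y = (n/N)*Y.
def map_displacement(m, n, M, N, X, Y, gain=1.0):
    """'gain' is the constant that amplifies or damps responsiveness."""
    x = gain * (m / M) * X
    y = gain * (n / N) * Y
    return x, y

# Example: a 64-pixel horizontal wave in a 640 x 480 camera image on a
# 1920 x 1080 screen moves the cursor by (192.0, 0.0) pixels at unit gain.
print(map_displacement(64, 0, 640, 480, 1920, 1080))
```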
The scaling relationship between the user's actual movement and the resulting action taking place on the display screen may result in performance challenges, especially when limited space is available to the user. For example, when two family members sit together on a couch playing a video game displayed on a TV, each user's effective range of motion is limited by the presence of the other user. Accordingly, the scaling factor may be altered to reflect a restricted range of motion, so that small physical movements correspond to larger on-screen movements. This can take place automatically upon detection, by the machine-control system, of multiple adjacent users. The scaling ratio may also depend, in various embodiments, on the rendered content of the screen. For example, in a busy rendered environment with many objects, a small scaling ratio may be desired to allow the user to navigate with precision; whereas for simpler or more open environments, such as where the user pretends to throw a ball or swing a golf club and the detected action is rendered on the screen, a large scaling ratio may be preferred.
As noted above, the proper relationship between the user's movement and the corresponding motion displayed on the screen may depend on the user's position relative to the recording camera. For example, the ratio of the user's actual movement m to the pixel size M in the captured image may depend on the viewing angle of the camera as well as the distance between the camera and the user. If the viewing angle is wide or the user is at a distance far away from the camera, the detected relative movement of the user's gesture (i.e., m/M) is smaller than it would be if the viewing angle were not so wide or the user were closer to the camera. Accordingly, in the former case, the virtual object moves too little on the display in response to a gesture, whereas in the latter case the virtual object moves too far. In various embodiments, the ratio of the user's actual movement to the corresponding movement displayed on the screen is automatically coarsely adjusted based on, for example, the distance between the user and the camera (which may be tracked by ranging); this allows the user to move toward or away from the camera without disrupting the intuitive feel that the user has acquired for the relationship between actual and rendered movements.
In various embodiments, when the gesture is recognized but the detected user movement is minuscule (i.e., below a predetermined threshold), the gesture-based machine-control system switches from a low-sensitivity detection mode to a high-sensitivity mode where a 3D image of the hand gesture is accurately reconstructed based on the acquired 2D images and/or a 3D model. Because the high-sensitivity system can accurately detect small movements (e.g., less than a few millimeters) performed by a small part of the body, e.g., a finger, the ratio of the user's actual movement to the resulting movement displayed on the screen may be adjusted within a large range, for example, between 1000:1 and 1:1000.
A representative method 450 for a user to dynamically adjust the relationship between her actual motion and the resulting object movement displayed on the electronic device's screen in accordance with embodiments is shown in FIG. 4C. First, the user initiates communications with an electronic device by performing a gesture (452). The gesture is detected and recognized by a motion-capture device and an associated gesture-based machine-control system (454). An instruction associated with the gesture is identified (e.g., by a gesture-recognition module of the system) by comparing the detected gesture with gestures stored in a database (456). Then, the ratio of the user's actual movement to a resulting virtual action displayed on the screen is determined based on the instruction (458). Signals indicative of the instruction are then transmitted to the electronic device (460). Finally, upon receiving the signals, the electronic device displays a virtual action on the screen based on the determined ratio and the user's subsequent movement (462).
As discussed above with respect to FIG. 1A, and in more detail below with respect to FIGS. 9A and 9B, a gesture-recognition system (e.g., the system illustrated in FIG. 1A, which includes motion-capture hardware 108 and an associated computational system 110) captures images of an object, such as a hand 102, e.g., using one or more cameras; the object may be illuminated with one or more light sources 108, 110. An image-analysis module 114 detects the object in the images, and a gesture-recognition module 116 detects a gesture made using the object. Once detected, the gesture is input to an electronic device 104, which may use the gesture in a variety of ways (such as in manipulating a virtual object). Many different kinds of gestures may be detected, however, and an application running on the electronic device may not use or need every detected gesture. The sending of the unused gestures to the application may create unnecessary complexity in the application and/or consume unnecessary bandwidth over the link between the application and the gesture-recognition system.
In one embodiment, only a subset of the gestures captured by the gesture-recognition system is sent to the application running on the electronic device. The recognized gestures may be sent from the gesture-recognition module 116 to a gesture filter 130, as illustrated in FIG. 1A, and filtered based on one or more characteristics of the gestures. Gestures that pass the criteria of the filter 130 are sent to the application; gestures that do not pass are not sent and/or are deleted. The gesture filter 130 may be implemented as a separate program module; however, this is not required: the functionality of the filter 130 may be wholly or partially incorporated into the gesture-recognition module 116. In various embodiments, the gesture-recognition module 116 recognizes all detected gestures regardless of the settings of the filter 130 or recognizes a subset of detected gestures in accordance with the settings of the filter 130.
The characteristics of the filter 130 may be defined to suit a particular application or group of applications. In various embodiments, the characteristics may be received from a menu interface, read from a command or configuration file, communicated via an API, or obtained by any other similar method. The filter 130 may include sets of preconfigured characteristics and allow a user or application to select one of the sets. Examples of filter characteristics include the path that a gesture makes (the filter 130 may pass gestures having only relatively straight paths, for example, and block gestures having curvilinear paths); the velocity of a gesture (the filter 130 may pass gestures having high velocities, for example, and block gestures having low velocities); and/or the direction of a gesture (the filter may pass gestures having left-right motions, for example, and block gestures having forward-back motions). Further filter characteristics may be based on the configuration, shape, or disposition of the object making the gesture; for example, the filter 130 may pass only gestures made using a hand pointing with a certain finger (e.g., a third finger), a hand making a fist, or an open hand. The filter 130 may further pass only gestures made using a thumbs-up or thumbs-down gesture, for example for a voting application.
The filtering performed by the filter 130 may be implemented in accordance with any method known in the art. In one embodiment, gestures detected by the gesture-recognition module 116 are assigned a set of one or more characteristics (e.g., velocity or path) and the gestures and characteristics are maintained in a data structure. The filter 130 detects which of the assigned characteristics meet its filter characteristics and passes the gestures associated with those characteristics. The gestures that pass the filter 130 may be returned to one or more applications via an API or via a similar method. The gestures may, instead or in addition, be displayed on the display 106 and/or shown in a menu (e.g., for a live teaching interface application).
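An illustrative sketch of such a filter, assuming each detected gesture carries a dictionary of assigned characteristics; the field names and criteria are invented for illustration, not part of the described system.

```python
# Build a predicate that passes only gestures matching the filter criteria.
def make_filter(min_velocity=None, allowed_paths=None, allowed_directions=None):
    def passes(gesture):
        c = gesture["characteristics"]
        if min_velocity is not None and c["velocity"] < min_velocity:
            return False
        if allowed_paths is not None and c["path"] not in allowed_paths:
            return False
        if allowed_directions is not None and c["direction"] not in allowed_directions:
            return False
        return True
    return passes

# E.g., pass only fast, straight, left-right swipes to the application:
swipe_filter = make_filter(min_velocity=0.5,
                           allowed_paths={"straight"},
                           allowed_directions={"left-right"})
```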
As described above, the gesture-recognition module 116 compares a detected motion of an object to a library of known gestures and, if there is a match, returns the matching gesture. In one embodiment, a user, programmer, application developer, or other person supplements, changes, or replaces the known gestures with user-defined gestures. If the gesture-recognition module 116 recognizes a user-defined gesture, it returns the gesture to one or more programs via an API (or similar method). In one embodiment, again with reference to FIG. 1A, a gesture-settings module 132 screens motions for gestures based on an input of characteristics defining a gesture and returns a set of gestures having matching characteristics.
The user-defined characteristics may include any number of a variety of different attributes of a gesture. For example, the characteristics may include a path of a gesture (e.g., relatively straight, curvilinear; circle vs. swipe); parameters of a gesture (e.g., a minimum or maximum length); spatial properties of the gesture (e.g., a region of space in which the gesture occurs); temporal properties of the gesture (e.g., a minimum or maximum duration of the gesture); and/or a velocity of the gesture (e.g., a minimum or maximum velocity). Embodiments are not limited to only these attributes, however.
A conflict between a user-defined gesture and a predetermined gesture may be resolved in any number of ways. A programmer may, for example, specify that a predetermined gesture should be ignored. In another embodiment, a user-defined gesture is given precedence over a predetermined gesture such that, if a gesture matches both, the user-defined gesture is returned.
In various embodiments, gestures are interpreted based on their location and orientation relative to a virtual control construct. A "virtual control construct" as used herein with reference to an embodiment denotes a geometric locus defined (e.g., programmatically) in space and useful in conjunction with a control object, but not corresponding to a physical object; its purpose is to discriminate between different operational modes of the control object (and/or a user-interface element controlled therewith, such as a cursor) based on whether the control object intersects the virtual control construct. The virtual control construct, in turn, may be, e.g., a virtual surface construct (a plane oriented relative to a tracked orientation of the control object or an orientation of a screen displaying the user interface) or a point along a line or line segment extending from the tip of the control object. The term "intersect" is used broadly herein with reference to an embodiment to denote any instance in which the control object, which is an extended object, has at least one point in common with the virtual control construct and, in the case of an extended virtual control construct such as a line or two-dimensional surface, is not parallel thereto. This includes "touching" as an extreme case, but typically entails that portions of the control object fall on both sides of the virtual control construct.
In an embodiment and by way of example, one or more virtual control constructs can be defined computationally (e.g., programmatically using a computer or other intelligent machinery) based upon one or more geometric constructs to facilitate determining occurrence of engagement gestures from information about one or more control objects. Virtual control constructs in an embodiment can include virtual surface constructs, virtual linear or curvilinear constructs, virtual point constructs, virtual solid constructs, and complex virtual constructs comprising combinations thereof. Virtual surface constructs can comprise one or more surfaces, e.g., a plane, curved open surface, closed surface, bounded open surface, or generally any multi-dimensional virtual surface definable in two or three dimensions. Virtual linear or curvilinear constructs can comprise any one-dimensional virtual line, curve, line segment or curve segment definable in one, two, or three dimensions. Virtual point constructs can comprise any zero-dimensional virtual point definable in one, two, or three dimensions. Virtual solids can comprise one or more solids, e.g., spheres, cylinders, cubes, or generally any three-dimensional virtual solid definable in three dimensions.
In an embodiment, an engagement target can be defined using one or more virtual construct(s) coupled with a virtual control (e.g., slider, button, rotatable knob, or any graphical user interface component) for presentation to user(s) by a presentation system (e.g., displays, 3D projections, holographic presentation devices, non-visual presentation systems such as haptics, audio, and the like, any other devices for presenting information to users, or combinations thereof). Coupling a virtual control with a virtual construct enables the control object to “aim” for, or move relative to, the virtual control—and therefore the virtual control construct. Engagement targets in an embodiment can include engagement volumes, engagement surfaces, engagement lines, engagement points, or the like, as well as complex engagement targets comprising combinations thereof. An engagement target can be associated with an application or non-application (e.g., OS, systems software, etc.) so that virtual control managers (i.e., program routines, classes, objects, etc. that manage the virtual control) can trigger differences in interpretation of engagement gestures including presence, position and/or shape of control objects, control object motions, or combinations thereof to conduct machine control.
Engagement targets can be used to determine engagement gestures by providing the capability to discriminate between engagement and non-engagement (e.g., virtual touches, moves in relation to, and/or virtual pierces) of the engagement target by the control object. Thus, the user can, for example, operate a cursor in at least two modes: a disengaged mode in which it merely indicates a position on the screen, typically without otherwise affecting the screen content; and one or more engaged modes, which allow the user to manipulate the screen content. In the engaged mode, the user may, for example, drag graphical user-interface elements (such as icons representing files or applications, controls such as scroll bars, or displayed objects) across the screen, or draw or write on a virtual canvas. Further, transient operation in the engaged mode may be interpreted as a click event. Thus, operation in the engaged mode may correspond to, or emulate, touching a touch screen or touch pad, or controlling a mouse with a mouse button held down. Different or additional operational modes may also be defined, and may go beyond the modes available with traditional contact-based user input devices. The disengaged mode may simulate contact with a virtual control, and/or a hover in which the control is selected but not actuated. Other modes useful in various embodiments include an "idle" mode, in which no control is selected or virtually touched, and a "lock" mode, in which the last control to be engaged with remains engaged until disengaged. Yet further, hybrid modes can be created from the definitions of the foregoing modes in embodiments.
The term “cursor,” as used in this discussion, refers generally to the cursor functionality rather than the visual element; in other words, the cursor is a control element operable to select a screen position—whether or not the control element is actually displayed—and manipulate screen content via movement across the screen, i.e., changes in the selected position. The cursor need not always be visible in the engaged mode. In some instances, a cursor symbol still appears, e.g., overlaid onto another graphical element that is moved across the screen, whereas in other instances, cursor motion is implicit in the motion of other screen elements or in newly created screen content (such as a line that appears on the screen as the control object moves), obviating the need for a special symbol. In the disengaged mode, a cursor symbol is typically used to visualize the current cursor location. Alternatively or additionally, a screen element or portion presently co-located with the cursor (and thus the selected screen location) may change brightness, color, or some other property to indicate that it is being pointed at. However, in certain embodiments, the symbol or other visual indication of the cursor location may be omitted so that the user has to rely on his own observation of the control object relative to the screen to estimate the screen location pointed at. (For example, in a shooter game, the player may have the option to shoot with or without a “virtual sight” indicating a pointed-to screen location.)
In various embodiments, to trigger an engaged mode—corresponding to, e.g., touching an object or a virtual object displayed on a screen—the control object's motion toward an engagement target such as a virtual surface construct (i.e., a plane, plane portion, or other (non-planar or curved) surface computationally or programmatically defined in space, but not necessarily corresponding to any physical surface) may be tracked; the motion may be, e.g., a forward motion starting from a disengaged mode, or a backward retreating motion. When the control object reaches a spatial location corresponding to this virtual surface construct—i.e., when the control object intersects, "touches," or "pierces" the virtual surface construct—the user interface (or a component thereof, such as a cursor, user-interface control, or user-interface environment) is operated in the engaged mode; as the control object retracts from the virtual surface construct, user-interface operation switches back to the disengaged mode.
In embodiments, the virtual surface construct may be fixed in space, e.g., relative to the screen; for example, it may be defined as a plane (or portion of a plane) parallel to and located several inches in front of the screen in one application, or as a curved surface defined in free space convenient to one or more users and optionally proximate to display(s) associated with one or more machines under control. The user can engage this plane while remaining at a comfortable distance from the screen (e.g., without needing to lean forward to reach the screen). The position of the plane may be adjusted by the user from time to time. In embodiments, however, the user is relieved of the need to explicitly change the plane's position; instead, the plane (or other virtual surface construct) automatically moves along with, as if tethered to, the user's control object. For example, a virtual plane may be computationally defined as perpendicular to the orientation of the control object and located a certain distance, e.g., 3-4 millimeters, in front of its tip when the control object is at rest or moving with constant velocity. As the control object moves, the plane follows it, but with a certain time lag (e.g., 0.2 second). As a result, as the control object accelerates, the distance between its tip and the virtual touch plane changes, allowing the control object, when moving towards the plane, to eventually "catch" the plane—that is, the tip of the control object to touch or pierce the plane. Alternatively, instead of being based on a fixed time lag, updates to the position of the virtual plane may be computed based on a virtual energy potential defined to accelerate the plane towards (or away from) the control object tip depending on the plane-to-tip distance, likewise allowing the control object to touch or pierce the plane. Either way, such virtual touching or piercing can be interpreted as engagement events. Further, in some embodiments, the degree of piercing (i.e., the distance beyond the plane that the control object reaches) is interpreted as an intensity level. To guide the user as she engages with or disengages from the virtual plane (or other virtual surface construct), the cursor symbol may encode the distance from the virtual surface visually, e.g., by changing in size with varying distance.
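A minimal sketch of the fixed-time-lag variant of the tethered plane just described: the plane sits a steady-state distance in front of where the fingertip was one lag period ago, so accelerating toward it lets the tip catch and pierce the plane. The class name, frame rate, and standoff value are assumptions for illustration.

```python
# Time-lagged virtual plane; ~4 mm standoff, ~0.2 s lag at 100 fps assumed.
from collections import deque

class LaggedPlane:
    def __init__(self, standoff=0.004, lag_frames=20):
        self.standoff = standoff
        self.history = deque(maxlen=lag_frames)   # rolling window of past frames

    def update(self, tip_pos, tip_dir):
        """tip_pos: fingertip position (x, y, z); tip_dir: unit vector along
        the finger, pointing away from the hand. Returns (plane_point, pierced)."""
        self.history.append((tuple(tip_pos), tuple(tip_dir)))
        old_pos, old_dir = self.history[0]        # state one lag period ago
        # Plane anchored a standoff distance ahead of the lagged tip position.
        plane_point = tuple(p + self.standoff * d for p, d in zip(old_pos, old_dir))
        # Signed distance of the current tip past the plane, along its normal.
        signed = sum((t - q) * d for t, q, d in zip(tip_pos, plane_point, old_dir))
        return plane_point, signed > 0            # pierced once the tip passes it
```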
In an embodiment, once engaged, further movements of the control object may serve to move graphical components across the screen (e.g., drag an icon, shift a scroll bar, etc.), change the perceived "depth" of the object to the viewer (e.g., resize and/or change the shape of objects displayed on the screen, alone or coupled with other visual effects) to create a perception of "pulling" objects into the foreground of the display or "pushing" objects into the background of the display, create new screen content (e.g., draw a line), or otherwise manipulate screen content until the control object disengages (e.g., by pulling away from the virtual surface, by indicating disengagement with some other gesture of the control object (e.g., curling the forefinger backward), and/or with some other movement of a second control object (e.g., waving the other hand)). Advantageously, tying the virtual surface construct to the control object (e.g., the user's finger), rather than fixing it relative to the screen or other stationary objects, allows the user to consistently use the same motions and gestures to engage and manipulate screen content regardless of his precise location relative to the screen. To eliminate the inevitable jitter that typically accompanies the control object's movements and might otherwise result in unintentional switching back and forth between the modes, the control object's movements may be filtered and the cursor position thereby stabilized. Since faster movements will generally result in more jitter, the strength of the filter may depend on the speed of motion.
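A hedged sketch of speed-dependent smoothing of the kind suggested above: faster motion gets stronger filtering. The exponential-smoothing rule and the coefficients are assumptions, not the described system's filter.

```python
# Speed-dependent exponential smoothing to stabilize the cursor position.
import numpy as np

class JitterFilter:
    def __init__(self, base_alpha=0.9, min_alpha=0.3, speed_scale=0.5):
        self.base_alpha = base_alpha      # new-sample weight at zero speed
        self.min_alpha = min_alpha        # floor: heaviest smoothing allowed
        self.speed_scale = speed_scale    # how fast smoothing strengthens
        self.state = None

    def smooth(self, pos, speed):
        """pos: raw cursor position; speed: control-object speed (m/s).
        Higher speed lowers alpha, i.e., strengthens the filter."""
        alpha = max(self.min_alpha, self.base_alpha - self.speed_scale * speed)
        pos = np.asarray(pos, float)
        self.state = pos if self.state is None else alpha * pos + (1 - alpha) * self.state
        return self.state
```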
In an embodiment and by way of example, as illustrated in FIGS. 5A and 5B, a virtual control construct implemented by a virtual plane 500 may be defined in front of and substantially parallel to the screen 502 of a machine under control. When the control object 504 (e.g., as shown, the user's index finger) "touches" or "pierces" the virtual plane (i.e., when its spatial location coincides with, intersects, or moves beyond the virtual plane's computationally defined spatial location), the cursor 506 and/or machine interface operates in the engaged mode (FIG. 5B); otherwise, the cursor and/or machine interface operates in the disengaged mode (FIG. 5A). To implement two or more distinct engaged modes, multiple virtual planes may be defined. For instance, a drawing application may define two substantially parallel virtual planes at different distances from the screen. When the user, moving his finger towards the screen, pierces the first virtual plane, the user may be able to operate menus and controls within the application; when his finger pierces the second virtual plane, the finger's further (e.g., lateral) motions may be converted to line drawings on the screen. Two parallel virtual planes may also be used to, effectively, define a virtual control construct with a certain associated thickness (i.e., a "virtual slab"). Control object movements within that virtual slab may operate the cursor in the engaged mode, while movements on either side of the virtual slab correspond to the disengaged mode. A planar virtual control construct with a non-zero thickness may serve to avoid unintended engagement and disengagement resulting from inevitable small motions in and out of the virtual plane (e.g., due to the inherent instability of the user's hand and/or the user's perception of depth). The thickness may vary depending on one or more sensed parameters (e.g., the overall speed of the control object's motion; the faster the movements, the thicker the slab may be chosen to be).
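A minimal sketch of the "virtual slab" idea under stated assumptions: a plane with non-zero, speed-dependent thickness avoids mode flicker from small in-and-out tremors. The thickness values and the linear speed scaling are hypothetical.

```python
# Engaged/disengaged decision for a planar construct with non-zero thickness.
def slab_mode(depth, speed, center=0.0, base_thickness=0.01, speed_gain=0.02):
    """depth: signed distance (m) of the fingertip from the slab center plane;
    speed: control-object speed (m/s). Faster motion widens the slab."""
    half = (base_thickness + speed_gain * speed) / 2.0
    return "engaged" if abs(depth - center) <= half else "disengaged"
```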
Transitions between the different operational modes may, but need not, be visually indicated by a change in the shape, color (as in FIGS. 5A and 5B), or other visual property of the cursor or other displayable object and/or by audio feedback. In some embodiments, the cursor symbol indicates not only the operational mode, but also the control object's distance from the virtual control construct. For instance, the cursor symbol may take the form of a circle, centered at the cursor location, whose radius is proportional to (or otherwise monotonically increasing with) the distance between the control object and the virtual control construct, and which, optionally, changes color when switching from the disengaged mode into the engaged mode.
Of course, the system under control need not be a desktop computer. FIG. 5C-1 illustrates an embodiment in which free-space gestures are used to operate a handheld tablet 510. The tablet 510 may be connected, e.g., via a USB cable 512 (or any other wired or wireless connection), to a motion-capture device 514 (such as, for example, a dual-camera motion controller as provided by Leap Motion, Inc., San Francisco, CA, or other interfacing mechanisms and/or combinations thereof) that is positioned and oriented so as to monitor a region where hand motions normally take place. For example, the motion-capture device 514 may be placed onto a desk or other working surface, and the tablet 510 may be held at an angle to that working surface to facilitate easy viewing of the displayed content. The tablet 510 may be propped up on a tablet stand or against a wall or other suitable vertical surface to free up the second hand, facilitating two-hand gestures. FIG. 5C-2 illustrates a modified tablet embodiment, in which the motion-capture device 514 is integrated into the frame of the tablet 510.
The virtual surface construct need not be planar, but may be curved in space, e.g., to conform to the user's range of movements. FIG. 5D illustrates, for example, a cylindrical virtual surface construct 520 in front of an arrangement of three monitors 522, 524, 526, which may all be connected to the same computer. The user's finger motions may control screen content on any one of the screens, depending on the direction in which the finger 528 points and/or the portion of the virtual surface construct 520 that it pierces. Of course, other types of curved virtual surface constructs of regular (e.g., spherical) or irregular shape, or virtual surface constructs composed of multiple (planar or curved) segments, may also be used in combination with one or more screens. Further, in some embodiments, the virtual control construct is a virtual solid construct or a virtual closed surface (such as, e.g., a sphere, box, oriented ellipsoid, etc.) or a portion thereof, having an interior (or, alternatively, exterior) that defines a three-dimensional engagement target. For instance, in an application that allows the user to manipulate a globe depicted on the screen, the virtual control construct may be a virtual sphere located at some distance in front of the screen. The user may be able to rotate the on-screen globe by moving his fingertips while they are touching or piercing the spherical virtual surface construct (from outside). To allow the user to manipulate the globe from inside, the spherical virtual surface construct may be defined as surrounding the user (or at least his hand), with its exterior serving as the engagement target. Engagement and disengagement of the control object need not necessarily be defined relative to a two-dimensional surface. Rather, in some embodiments, the virtual control construct may be a virtual point construct along a virtual line (or line segment) extending from the control object, or a line within a plane extending from the control object.
The location and/or orientation of the virtual surface construct (or other virtual control construct) may be defined relative to the room and/or stationary objects (e.g., a screen) therein, relative to the user, relative to the device 514, or relative to some combination thereof. For example, a planar virtual surface construct may be oriented parallel to the screen, perpendicular to the direction of the control object, or at some angle in between. The location of the virtual surface construct can, in some embodiments, be set by the user, e.g., by means of a particular gesture recognized by the motion-capture system. To give just one example, the user may, with her index finger stretched out, have her thumb and middle finger touch so as to pin the virtual surface construct at a certain location relative to the current position of the index-finger tip. Once set in this manner, the virtual surface construct may be stationary until reset by the user via performance of the same gesture in a different location.
In some embodiments, the virtual surface construct is tied to and moves along with the control object, i.e., the position and/or orientation of the virtual surface construct are updated based on the tracked control object motion. This affords the user maximum freedom of motion by allowing the user to control the user interface from anywhere (or almost anywhere) within the space monitored by the motion-capture system. To enable the relative motion between the control object and virtual surface construct that is necessary for piercing the surface, the virtual surface construct follows the control object's movements with some delay. Thus, starting from a steady-state distance between the virtual surface construct and the control object tip in the disengaged mode, the distance generally decreases as the control object accelerates towards the virtual surface construct, and increases as the control object accelerates away from the virtual surface construct. If the control object's forward acceleration (i.e., towards the virtual surface construct) is sufficiently fast and/or prolonged, the control object eventually pierces the virtual surface construct. Once pierced, the virtual surface construct again follows the control object's movements. However, whereas, in the disengaged mode, the virtual surface construct is “pushed” ahead of the control object (i.e., is located in front of the control object tip), it is “pulled” behind the control object in the engaged mode (i.e., is located behind the control object tip). To disengage, the control object generally needs to be pulled back through the virtual surface construct with sufficient acceleration to exceed the surface's responsive movement.
In an embodiment, an engagement target can be defined as merely the point where the user touches or pierces a virtual control construct. For example, a virtual point construct may be defined along a line extending from or through the control object tip, or any other point or points on the control object, located a certain distance from the control object tip in the steady state, and moving along the line to follow the control object. The line may, e.g., be oriented in the direction of the control object's motion, perpendicularly project the control object tip onto the screen, extend in the direction of the control object's axis, or connect the control object tip to a fixed location, e.g., a point on the display screen. Irrespective of how the line and virtual point construct are defined, the control object can, when moving sufficiently fast and in a certain manner, “catch” the virtual point construct. Similarly, a virtual line construct (straight or curved) may be defined as a line within a surface intersecting the control object at its tip, e.g., as a line lying in the same plane as the control object and oriented perpendicular (or at some other non-zero angle) to the control object. Defining the virtual line construct within a surface tied to and intersecting the control object tip ensures that the control object can eventually intersect the virtual line construct.
In an embodiment, engagement targets defined by one or more virtual point constructs or virtual line (i.e., linear or curvilinear) constructs can be mapped onto engagement targets defined as virtual surface constructs, in the sense that the different mathematical descriptions are functionally equivalent. For example, a virtual point construct may correspond to the point of a virtual surface construct that is pierced by the control object (and a virtual line construct may correspond to a line in the virtual surface construct going through the virtual point construct). If the virtual point construct is defined on a line projecting the control object tip onto the screen, control object motions perpendicular to that line move the virtual point construct in a plane parallel to the screen, and if the virtual point construct is defined along a line extending in the direction of the control object's axis, control object motions perpendicular to that line move the virtual point construct in a plane perpendicular to that axis; in either case, control object motions along the line move the control object tip towards or away from the virtual point construct and, thus, the respective plane. Thus, the user's experience interacting with a virtual point construct may be little (or no) different from interacting with a virtual surface construct. Hereinafter, the description will, for ease of illustration, focus on virtual surface constructs. A person of skill in the art will appreciate, however, that the approaches, methods, and systems described can be straightforwardly modified and applied to other virtual control constructs (e.g., virtual point constructs or virtual linear/curvilinear constructs).
The position and/or orientation of the virtual surface construct (or other virtual control construct) are typically updated continuously or quasi-continuously, i.e., as often as the motion-capture system determines the control object location and/or direction (which, in visual systems, corresponds to the frame rate of image acquisition and/or image processing). However, embodiments can also update the virtual surface construct less frequently (e.g., only every other frame, to save computational resources) or more frequently (e.g., based on interpolations between the measured control object positions).
In some embodiments, the virtual surface construct follows the control object with a fixed time lag, e.g., between 0.1 and 1.0 second. In other words, the location of the virtual surface construct is updated, for each frame, based on where the control object tip was a certain amount of time (e.g., 0.2 second) in the past. This is illustrated in FIG. 6, which shows the control object and the virtual surface construct (represented as a plane) at locations within a consistent coordinate system across the subfigures for various points in time according to various embodiments. As depicted, the plane may be computationally defined as substantially perpendicular to the orientation of the control object (meaning that its normal is angled relative to the control object orientation by less than a certain small amount, e.g., less than 5°, and preferably smaller than 1°). Of course, the virtual plane need not necessarily be perpendicular to the orientation of the control object. In some embodiments, it is, instead, substantially parallel to the screen, but still dynamically positioned relative to the control object (e.g., so as to remain at a certain distance from the control object tip, where distance may be measured, e.g., in a direction perpendicular to the screen or, alternatively, in the direction of the control object).
At a first point t=t0 in time, when the control object is at rest, the virtual plane is located at its steady-state distance d in front of the control object tip; this distance may be, e.g., a few millimeters. At a second point t=t1 in time—after the control object has started moving towards the virtual plane, but before the lag period has passed—the virtual plane is still in the same location, but its distance from the control object tip has decreased due to the control object's movement. One lag period later, at t=t1+Δtlag, the virtual plane is positioned the steady-state distance away from the location of the control object tip at the second point in time, but due to the control object's continued forward motion, the distance between the control object tip and the virtual plane has further decreased. Finally, at a fourth point in time t=t2, the control object has pierced the virtual plane. One lag time after the control object has come to a halt, at t=t2+Δtlag, the virtual plane is again a steady-state distance away from the control object tip but now on the other side. When the control object is subsequently pulled backwards, the distance between its tip and the virtual plane decreases again (t=t3 and t=t4), until the control object tip emerges at the first side of the virtual plane (t=t5). The control object may stop at a different position than where it started, and the virtual plane will eventually follow it and be, once more, a steady-state distance away from the control object tip (t=t6). Even if the control object continues moving, if it does so at a constant speed, the virtual plane will, after an initial lag period to “catch up,” follow the control object at a constant distance.
The steady-state distances in the disengaged mode and the engaged mode may, but need not, be the same. In some embodiments, for instance, the steady-state distance in the engaged mode is larger, such that disengaging from the virtual plane (i.e., “unclicking”) appears harder to the user than engaging (i.e., “clicking”) because it requires a larger motion. Alternatively or additionally, to achieve a similar result, the lag times may differ between the engaged and disengaged modes. Further, in some embodiments, the steady-state distance is not fixed, but adjustable based on the control object's speed of motion, generally being greater for higher control object speeds. As a result, when the control object moves very fast, motions toward the plane are “buffered” by the rather long distance that the control object has to traverse relative to the virtual plane before an engagement event is recognized (and, similarly, backwards motions for disengagement are buffered by a long disengagement steady-state distance). A similar effect can also be achieved by decreasing the lag time, i.e., increasing the responsiveness of touch-surface position updates, as the control object speed increases. Such speed-based adjustments may serve to avoid undesired switching between the modes that may otherwise be incidental to fast control object movements.
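By way of example only, the lag-based following behavior and the speed-adjusted steady-state distance described above can be sketched in a few lines of Python. All names and constants here (FRAME_RATE, LAG_SECONDS, the base and gain of steady_state_distance) are illustrative assumptions, not limitations of the described embodiments:

    import numpy as np
    from collections import deque

    FRAME_RATE = 100.0                       # assumed tracking rate, frames/s
    LAG_SECONDS = 0.2                        # fixed time lag of the plane
    LAG_FRAMES = max(1, int(LAG_SECONDS * FRAME_RATE))

    tip_history = deque(maxlen=LAG_FRAMES)   # recent control object tip positions

    def steady_state_distance(tip_speed, base=0.005, gain=0.01):
        # Larger buffer distance for faster motion, per the speed-based
        # adjustment above (meters; constants invented for illustration).
        return base + gain * tip_speed

    def update_plane(tip_position, tip_direction, tip_speed, engaged):
        # The plane trails the tip: it is placed relative to where the tip
        # was (up to) LAG_SECONDS ago, ahead of the tip when disengaged and
        # behind it when engaged.
        tip_history.append(tip_position)
        lagged_tip = tip_history[0]
        d = steady_state_distance(tip_speed)
        side = -1.0 if engaged else 1.0
        return lagged_tip + side * d * tip_direction

    def is_pierced(tip_position, plane_point, tip_direction):
        # Engagement test: the tip lies on the far side of the plane.
        return float(np.dot(tip_position - plane_point, tip_direction)) > 0.0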
In various embodiments, the position of the virtual plane (or other virtual surface construct) is updated not based on a time lag, but based on its current distance from the control object tip. That is, for any image frame, the distance between the current control object tip position and the virtual plane is computed (e.g., with the virtual-plane position being taken from the previous frame), and, based thereon, a displacement or shift to be applied to the virtual plane is determined. In some embodiments, the update rate as a function of distance may be defined in terms of a virtual “potential-energy surface” or “potential-energy curve.” In FIG. 7A, an exemplary such potential-energy curve 700 is plotted as a function of the distance of the virtual plane from the control object tip according to various embodiments. The negative derivative 702 (or slope) of this curve, which specifies the update rate, i.e., the shift in the virtual plane's position per frame (in arbitrary units), is shown in FIG. 7B. The minima of the potential-energy curve 700 determine the steady-state distances 704, 706 to both sides of the control object; at these distances, the virtual plane is not updated at all. At larger distances, the virtual plane is attracted towards the control object tip, at a rate that generally increases with distance. For example, at point 708, where the virtual plane is a positive distance d1 away from the control object, a negative displacement or shift Ds1 is applied to bring the virtual plane closer. Conversely, at point 710, where the virtual plane has a negative distance d2 from the control object tip (corresponding to piercing of the virtual plane, i.e., the engaged mode), a positive shift Ds2 is applied to move the virtual plane closer to the control object. At distances below the steady-state distance (e.g., at point 712), the virtual plane is repelled by the control object and driven back towards the steady state. The magnitude of the local maximum 714 between the two steady states determines the level of force or acceleration needed to cross from the disengaged to the engaged mode or back. In certain embodiments, the potential-energy curve 700 is given an even more physical interpretation, and its negative slope is associated with an acceleration, i.e., a change in the velocity of the virtual plane, rather than a change in its position. In this case, the virtual plane does not immediately stop as it reaches a steady state, but oscillates around the steady state. To slow down the virtual plane's motion and thereby stabilize its position, a friction term may be introduced into the physical model.
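For concreteness, a minimal Python sketch of this distance-based update follows. The piecewise "potential" below is an invented stand-in for the curve 700 of FIG. 7A, chosen only so that its negative slope vanishes at the two steady-state distances and changes sign at zero; the constants are assumptions:

    import numpy as np

    D_STEADY = 0.005      # steady-state distance on either side (assumed)
    STIFFNESS = 0.3       # fraction of the error corrected per frame (assumed)

    def plane_shift(s):
        # Negative derivative of a simple potential with minima at
        # +D_STEADY (disengaged) and -D_STEADY (engaged): the plane is
        # attracted toward the nearer steady state, so crossing s = 0
        # requires the control object to push through a barrier.
        target = D_STEADY if s >= 0.0 else -D_STEADY
        return STIFFNESS * (target - s)

    def update_plane_position(plane_point, tip_point, normal):
        # s > 0: plane in front of the tip; s < 0: plane pierced (engaged).
        s = float(np.dot(plane_point - tip_point, normal))
        return plane_point + plane_shift(s) * normal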
The potential-energy curve need not be symmetric, of course. FIG. 7C, for example, shows an asymmetric curve in which the steady-state distance in the engaged mode is larger than that in the disengaged mode, rendering disengagement harder. Further, as illustrated in FIG. 7D, the curve may have more than two (e.g., four) steady states 720, which may correspond to one disengaged and three engaged modes. The requisite force to transition between modes depends, again, on the heights of the local maxima 722 between the steady states. In some embodiments, the curve abruptly jumps at the steady-state points and assumes a constant, higher value therebetween. In this case, which is illustrated in FIG. 7E, the position of the virtual plane is not updated whenever the control object tip is within the steady-state distance from the virtual plane on either side, allowing fast transitions between the modes. Accordingly, the potential-energy curve may take many other forms, which may be tailored to a desired engagement-disengagement force profile experienced by the user. Moreover, the virtual plane may be updated in accordance with a two-dimensional potential-energy surface that defines the update rate depending on, e.g., the distances between the virtual plane and control object tip along various directions (as opposed to only one, e.g., the perpendicular and shortest, distance of the control object tip from the virtual plane). For example, the virtual plane may follow the control object differently for different relative orientations between the control object and the virtual plane, and each such relative orientation may correspond to a cross-section through the potential-energy surface. Two-dimensional potential-energy surfaces may also be useful to control position updates applied to a curved virtual surface construct.
Furthermore, the potential piercing energy need not, or not only, be a function of the distance from the control object tip to the virtual surface construct, but may depend on other factors. For example, in some embodiments, a stylus with a pressure-sensitive grip is used as the control object. In this case, the pressure with which the user squeezes the stylus may be mapped to the piercing energy.
Whichever way the virtual surface construct is updated, jitter in the control object's motions may result in unintentional transitions between the engaged and disengaged modes. While such modal instability may be combatted by increasing the steady-state distance (i.e., the “buffer zone” between control object and virtual surface construct), this comes at the cost of requiring the user, when she intends to switch modes, to perform larger movements that may feel unnatural. The trade-off between modal stability and user convenience may be improved by filtering the tracked control object movements. Specifically, jitter may be filtered out, based on the generally more frequent changes in direction associated with it, with some form of time averaging. Accordingly, in one embodiment, a moving-average filter spanning, e.g., a few frames, is applied to the tracked movements, such that only a net movement within each time window is used as input for cursor control. Since jitter generally increases with faster movements, the time-averaging window may be chosen to likewise increase as a function of control object velocity (such as a function of overall control object speed or of a velocity component, e.g., perpendicular to the virtual plane). In another embodiment, the control object's previous and newly measured position are averaged with weighting factors that depend, e.g., on velocity, frame rate, and/or other factors. For example, the old and new positions may be weighted with multipliers of x and (1−x), respectively, where x varies between 0 and 1 and increases with velocity. In one extreme, for x=1, the cursor remains completely still, whereas for the other extreme, x=0, no filtering is performed at all.
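A minimal sketch of the velocity-weighted averaging just described, with invented clamp constants (v_min, v_max) mapping speed to the weight x; it applies equally to scalar coordinates or to numpy position vectors:

    def smooth_position(prev_pos, new_pos, speed, v_min=0.05, v_max=1.5):
        # x grows with speed: fast, jitter-prone motion is averaged more
        # heavily; x is capped below 1.0 so the cursor can never freeze.
        x = (speed - v_min) / (v_max - v_min)
        x = min(max(x, 0.0), 0.9)
        return x * prev_pos + (1.0 - x) * new_pos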
FIG. 8A summarizes representative methods for control-object-controlled cursor operation that utilize a virtual surface construct moving with the control object in accordance with various embodiments. In the method embodiment illustrated by FIG. 8A, a control object is tracked (800), based on computer vision or otherwise, to determine its position and/or orientation in space (typically within a detection zone proximate to the computer screen). Optionally, the tracked control object motion is computationally filtered to reduce jitter (802). Based on the tracked control object in conjunction with a definition of the virtual surface construct relative thereto, the position and/or orientation of the virtual surface construct are then computed (804). In embodiments where the virtual surface construct is updated based on a control object position in the past, it may initially take a few control object tracking cycles (e.g., frames in image-based tracking) before the first position of the virtual surface construct is established; thereafter, the virtual surface construct can be updated every cycle. In embodiments where the virtual surface construct is shifted from cycle to cycle based on its instantaneous distance from the control object tip, the position of the virtual surface construct may be initiated arbitrarily, e.g., such that the virtual surface construct starts a steady-state distance away from the control object. Following computation of the virtual surface construct, the current operational mode (engaged or disengaged) is identified based on a determination whether the control object touches or pierces the virtual surface construct or not (806). Further, the current cursor position is calculated, typically from the control object's position and orientation relative to the screen (808). (This step may be performed prior to, or in parallel with, the computations of the virtual surface construct.) Based on the operational mode and cursor position, the screen content is then updated (810), e.g., to move the cursor symbol or re-arrange other screen content. Steps 800-810 are executed in a loop as long as the user interacts with the system via free-space control object motions.
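Expressed as code, steps 800-810 might be organized as a single loop such as the Python sketch below. The callables get_frame and render stand in for whatever motion-capture and display machinery is actually used, and the de-jitter, surface-update, and projection rules are deliberately simplistic placeholders rather than the described embodiments themselves:

    import numpy as np

    def interaction_loop(get_frame, render, steady=0.005, gain=0.3):
        # get_frame() -> (tip, direction) as 3-vectors, or None at end;
        # render(cursor_xy, engaged) draws the cursor (both assumed).
        plane_point, engaged, prev_tip = None, False, None
        while (frame := get_frame()) is not None:
            tip, direction = frame                                # step 800
            if prev_tip is not None:                              # step 802
                tip = 0.5 * prev_tip + 0.5 * tip                  # de-jitter
            prev_tip = tip
            side = -1.0 if engaged else 1.0                       # step 804
            target = tip + side * steady * direction
            plane_point = target if plane_point is None else (
                plane_point + gain * (target - plane_point))
            engaged = float(np.dot(tip - plane_point, direction)) > 0.0  # step 806
            cursor = tip[:2]                                      # step 808
            render(cursor, engaged)                               # step 810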
In some embodiments, temporary piercing of the virtual surface construct—i.e., a clicking motion including penetration of the virtual surface construct immediately followed by withdrawal from the virtual surface construct—switches between modes and locks in the new mode. For example, starting in the disengaged mode, a first click event may switch the control object into the engaged mode, where it may then remain until the virtual surface construct is clicked again.
Further, in some embodiments, the degree of piercing (i.e., the distance beyond the virtual surface construct that the control object initially reaches, before the virtual surface construct catches up) is interpreted as an intensity level that can be used to refine the control input. For example, the intensity (of engagement) in a swiping gesture for scrolling through screen content may determine the speed of scrolling. Further, in a gaming environment or other virtual world, different intensity levels when touching a virtual object (by penetrating the virtual surface construct while the cursor is positioned on the object as displayed on the screen) may correspond to merely touching the object versus pushing the object over. As another example, when hitting the keys of a virtual piano displayed on the screen, the intensity level may translate into the volume of the sound created. Thus, touching or engagement of a virtual surface construct (or other virtual control construct) may provide user input beyond the binary discrimination between engaged and disengaged modes.
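One way such an intensity level could be derived and applied, assuming the signed piercing depth is already available from the touch-detection step (the normalization depth and output scales below are invented for illustration):

    def engagement_intensity(piercing_depth, full_depth=0.05):
        # 0.0 = barely touching, 1.0 = fully pressed (depths in meters).
        if piercing_depth <= 0.0:
            return 0.0
        return min(piercing_depth / full_depth, 1.0)

    def scroll_speed(intensity, max_lines_per_second=40.0):
        # Swipe intensity sets how fast screen content scrolls.
        return intensity * max_lines_per_second

    def piano_volume(intensity):
        # Key-strike intensity sets loudness, e.g., as a MIDI velocity.
        return int(127 * intensity)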
FIGS. 8B and 8B-1 illustrate at a higher conceptual level various methods for controlling a machine-user interface using free-space gestures or motions performed by a control object. The method involves receiving information including motion information for a control object (820). Further, it includes determining from the motion information whether the motion corresponds to an engagement gesture (822). This determination may be made by determining whether an intersection occurred between the control object and a virtual control construct (824); whether a dis-intersection of the control object from the at least one virtual control construct occurred (826); and/or whether motion of the control object occurred relative to at least one virtual control construct (828). Further, the determination may involve determining, from the motion information, one or more engagement attributes (e.g., a potential energy) defining an engagement gesture (830), and/or identifying an engagement gesture by correlating the motion information to one of a plurality of engagement gestures based in part upon one or more of motion of the control object, occurrence of any of an intersection, a dis-intersection or a non-intersection of the control object with the virtual control construct, and the set of engagement attributes (832). Once an engagement gesture has been recognized, the user-interface control to which the gesture applies (e.g., a control associated with an application or an operating environment, or a special control) is selected or otherwise determined (834). The control may then be manipulated according to the gesture (836).
As will be readily apparent to those of skill in the art, the methods described above can be readily extended to the control of a user interface with multiple simultaneously tracked control objects. For instance, both left and right index fingers of a user may be tracked, each relative to its own associated virtual touch surface, to operate two cursors simultaneously and independently. As another example, the user's hand may be tracked to determine the positions and orientations of all fingers; each finger may have its own associated virtual surface construct (or other virtual control construct) or, alternatively, all fingers may share the same virtual surface construct, which may follow the overall hand motions. A joint virtual plane may serve, e.g., as a virtual drawing canvas on which multiple lines can be drawn by the fingers at once.
In an embodiment and by way of example, one or more control parameter(s) and the control object are applied to some control mechanism to determine the distance of the virtual control construct to a portion of the control object (e.g., tool tip(s), point(s) of interest on a user's hand or other points of interest). In some embodiments, a lag (e.g., filter or filtering function) is introduced to delay, or modify, application of the control mechanism according to a variable or a fixed increment of time, for example. Accordingly, embodiments can provide enhanced verisimilitude to the human-machine interaction, and/or increased fidelity of tracking control object(s) and/or control object portion(s).
In one example, the control object portion is a user's finger-tip. A control parameter is also the user's finger-tip. A control mechanism includes equating a plane-distance between virtual control construct and finger-tip to a distance between finger-tip and an arbitrary coordinate (e.g., center (or origin) of an interaction zone of the controller). Accordingly, the closer the finger-tip approaches to the arbitrary coordinate, the closer the virtual control construct approaches the finger-tip.
In another example, the control object is a hand, which includes a control object portion, e.g., a palm, determined by a “palm-point” or center of mass of the entire hand. A control parameter includes a velocity of the hand, as measured at the control object portion, i.e., the center of mass of the hand. A control mechanism includes filtering forward velocity over the last one (1) second. Accordingly, the faster the palm has recently been travelling forward, the closer the virtual control construct approaches to the control object (i.e., the hand).
In a further example, a control object includes a control object portion (e.g., a finger-tip). A control mechanism includes determining a distance between a thumb-tip (e.g., a first control object portion) and an index finger (e.g., a second control object portion). This distance can be used as a control parameter. Accordingly, the closer the thumb-tip and index-finger, the closer the virtual control construct is determined to be to the index finger. When the thumb-tip and index finger touch one another, the virtual control construct is determined to be partially pierced by the index finger. A lag (e.g., filter or filtering function) can introduce a delay in the application of the control mechanism by some time-increment proportional to any quantity of interest, for example horizontal jitter (i.e., the random motion of the control object in a substantially horizontal dimension). Accordingly, the greater the shake in a user's hand, the more lag will be introduced into the control mechanism.
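The three examples above, together with the jitter-proportional lag, could be sketched as interchangeable control mechanisms that each return a construct-to-tip distance; every constant and function name here is an illustrative assumption rather than a prescribed implementation:

    import numpy as np

    def distance_mechanism(finger_tip, zone_center):
        # First example: plane-distance equals the tip's distance from an
        # arbitrary coordinate (e.g., the interaction-zone center).
        return float(np.linalg.norm(finger_tip - zone_center))

    def velocity_mechanism(palm_velocities, frame_rate=100, base=0.05):
        # Second example: forward (z) palm velocity filtered over the last
        # second; faster recent forward travel brings the construct closer.
        recent = palm_velocities[-frame_rate:]
        forward = max(0.0, float(np.mean([v[2] for v in recent])))
        return max(0.0, base - 0.02 * forward)

    def pinch_mechanism(thumb_tip, index_tip, scale=0.5, offset=0.002):
        # Third example: thumb-to-index distance as the control parameter;
        # touching finger tips yield a non-positive distance, i.e., a
        # partially pierced construct.
        gap = float(np.linalg.norm(thumb_tip - index_tip))
        return scale * gap - offset

    def jitter_lag(horizontal_positions, gain=0.5):
        # More hand shake (spread of horizontal samples) => more lag.
        return gain * float(np.std(horizontal_positions))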
Machine and user-interface control via free-space motions relies generally on a suitable motion-capture device or system for tracking the positions, orientations, and motions of one or more control objects. For a description of tracking positions, orientations, and motions of control objects, reference may be had to U.S. patent application Ser. No. 13/414,485, filed on Mar. 7, 2012, the entire disclosure of which is incorporated herein by reference. In various embodiments, motion capture can be accomplished visually, based on a temporal sequence of images of the control object (or a larger object of interest including the control object, such as the user's hand) captured by one or more cameras. In one embodiment, images acquired from two (or more) vantage points are used to define tangent lines to the surface of the object and approximate the location and shape of the object based thereon, as explained in more detail below. Other vision-based approaches that can be used in embodiments include, without limitation, stereo imaging, detection of patterned light projected onto the object, or the use of sensors and markers attached to or worn by the object (such as, e.g., markers integrated into a glove) and/or combinations thereof. Alternatively or additionally, the control object may be tracked acoustically or ultrasonically, or using inertial sensors such as accelerometers, gyroscopes, and/or magnetometers (e.g., MEMS sensors) attached to or embedded within the control object. Embodiments can be built employing one or more of particular motion-tracking approaches that provide control object position and/or orientation (and/or derivatives thereof) tracking with sufficient accuracy, precision, and responsiveness for the particular application.
FIGS. 9A and 9B illustrate an exemplary system for capturing images and controlling a machine based on motions of a control object according to various embodiments. As shown in FIG. 9A, the system includes motion-capture hardware including two video cameras 900, 902 that acquire a stream of images of a region of interest 904 from two different vantage points. The cameras 900, 902 are connected to a computer 906 that processes these images to infer three-dimensional information about the position and orientation of a control object 908, or a larger object of interest including the control object (e.g., a user's hand), in the region of interest 904, and computes suitable control signals to the user interface based thereon. The cameras may be, e.g., CCD or CMOS cameras, and may operate, e.g., in the visible, infrared (IR), or ultraviolet wavelength regime, either by virtue of the intrinsic sensitivity of their sensors primarily to these wavelengths, or due to appropriate filters 910 placed in front of the cameras. In some embodiments, the motion-capture hardware includes, co-located with the cameras 900, 902, one or more light sources 912 that illuminate the region of interest 904 at wavelengths matching the wavelength regime of the cameras 900, 902. For example, the light sources 912 may be LEDs that emit IR light, and the cameras 900, 902 may capture IR light that is reflected off the control object and/or objects in the background. Due to the inverse-square dependence of the illumination intensity on the distance between the light sources 912 and the illuminated object, foreground objects such as the control object generally appear significantly brighter in the images than background objects, aiding in intensity-based foreground/background discrimination. In some embodiments, the cameras 900, 902 and light sources 912 are disposed below the control object to be tracked and point upward. For example, they may be placed on a desk to capture hand motions taking place in a spatial region above the desk, e.g., in front of the screen. This location may be optimal both for foreground/background discrimination (because the background is in this case typically the ceiling and, thus, far away) and for discerning the control object's direction and tip position (because the usual pointing direction will lie, more or less, in the image plane).
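The inverse-square falloff noted above is what makes very simple segmentation workable: a hand roughly 0.3 m from co-located LEDs receives on the order of (3.0/0.3)^2 = 100 times the irradiance of a ceiling 3 m away. A minimal sketch of the resulting intensity-based foreground/background discrimination follows; the threshold is an assumption and would in practice be calibrated:

    import numpy as np

    def foreground_mask(ir_frame, threshold=96):
        # ir_frame: 2-D uint8 array from one of the IR cameras; pixels
        # substantially brighter than the distant background are treated
        # as foreground (e.g., the control object).
        return ir_frame > threshold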
As mentioned above, the control object may, alternatively, be tracked acoustically. In this case, the light sources 912 are replaced by sonic sources. The sonic sources transmit sound waves (e.g., ultrasound that is not audible by the user) to the user; the user either blocks or alters the sound waves that impinge upon her, i.e., causes “sonic shadowing” or “sonic deflection.” Such sonic shadows and/or deflections can also be sensed and analyzed to reconstruct the shape, configuration, position, and orientation of the control object, and, based thereon, detect the user's gestures.
The computer 906 processing the images acquired by the cameras 900, 902 may be a suitably programmed general-purpose computer. As shown in FIG. 9B, it may include a processor (or CPU) 920, associated system memory 922 (typically volatile memory, e.g., RAM), one or more permanent storage devices 924 (such as hard disks, CDs, DVDs, memory keys, etc.), a display screen 926 (e.g., an LCD screen or CRT monitor), input devices (such as a keyboard and, optionally, a mouse) 928, and a system bus 930 that facilitates communication between these components and, optionally via a dedicated interface, with the cameras 900, 902 and/or other motion-capture hardware. The memory 922 may store computer-executable instructions, conceptually illustrated as a group of modules and programmed in any of various suitable programming languages (such as, e.g., C, C++, Java, Basic, Python, Pascal, Fortran, assembler languages, etc.), that control the operation of the CPU and provide the requisite computational functionality for implementing methods in accordance herewith. One of these modules is typically an operating system 932, such as the Microsoft WINDOWS operating system, the Unix operating system, the Linux operating system, the Xenix operating system, the IBM AIX operating system, the Hewlett Packard UX operating system, the Novell NETWARE operating system, the Sun Microsystems SOLARIS operating system, the OS/2 operating system, the BeOS operating system, the MACINTOSH operating system, the APACHE operating system, an OPENSTEP operating system, iOS and Android mobile operating systems, or another operating system or platform. In addition to the operating system 932, which provides low-level system functions (such as memory allocation and file management), the modules may include one or more end-user applications 934 (such as, e.g., web browsers, office applications, or video games), and modules for image processing/analysis and control-object tracking, gesture recognition, computation of the virtual control construct and determination of the operational mode, and cursor operation and user-interface control.
In one embodiment, an image analysis module 936 may analyze pairs of image frames acquired by the two cameras 900, 902 (and stored, e.g., in image buffers in memory 922) to identify the control object (or an object including the control object or multiple control objects, such as a user's hand) therein (e.g., as a non-stationary foreground object) and detect its edges. Next, the module 936 may, for each pair of corresponding rows in the two images, find an approximate cross-section of the control object by defining tangent lines on the control object that extend from the vantage points (i.e., the cameras) to the respective edge points of the control object, and inscribe an ellipse (or other geometric shape defined by only a few parameters) therein. The cross-sections may then be computationally connected in a manner that is consistent with certain heuristics and known properties of the control object (e.g., the requirement of a smooth surface) and resolves any ambiguities in the fitted ellipse parameters. As a result, the control object is reconstructed or modeled in three dimensions. This method, and systems for its implementation, are described in more detail in U.S. patent application Ser. No. 13/414,485, filed on Mar. 7, 2012, the entire disclosure of which is incorporated herein by reference. A larger object including multiple control objects can similarly be reconstructed with respective tangent lines and fitted ellipses, typically exploiting information of internal constraints of the object (such as a maximum physical separation between the fingertips of one hand). The image-analysis module 936 may, further, extract relevant control object parameters, such as tip positions and orientations as well as velocities, from the three-dimensional model. In some embodiments, this information can be inferred from the images at a lower level, prior to or without the need for fully reconstructing the control object. These operations are readily implemented by those skilled in the art without undue experimentation. In some embodiments, a filter module 938 receives input from the image-analysis module 936, and smoothens or averages the tracked control object motions; the degree of smoothing or averaging may depend on a control object velocity as determined by the image-analysis module 936.
A gesture-recognition module 940 may receive the tracking data about the control object from the image-analysis module 936 (or, after filtering, from the filter module 938), and use it to identify gestures, e.g., by comparison with gesture records stored in a database 941 on the permanent storage devices 924 and/or loaded into system memory 922. The gesture-recognition module may also include, e.g., as sub-modules, a gesture filter 942 that provides the functionality for ascertaining a dominant gesture among multiple simultaneously detected gestures, and a completion tracker 943 that determines a degree of completion of the gesture as the gesture is being performed.
An engagement-target module 944 may likewise receive data about the control object's location and/or orientation from the image-analysis module 936 and/or the filter module 938, and use the data to compute a representation of the virtual control construct, i.e., to define and/or update the position and orientation of the virtual control construct relative to the control object (and/or the screen); the representation may be stored in memory in any suitable mathematical form. A touch-detection module 945 in communication with the engagement-target module 944 may determine, for each frame, whether the control object touches or pierces the virtual control construct.
A user-interface control module 946 may map detected motions in the engaged mode into control input for the applications 934 running on the computer 906. Collectively, the end-user application 934 and the user-interface control module 946 may compute the screen content, i.e., an image for display on the screen 926, which may be stored in a display buffer (e.g., in memory 922 or in the buffer of a GPU included in the system). In particular, the user-interface control module 946 may include a cursor (sub)module 947 that determines a cursor location on the screen based on tracking data from the image-analysis module 936 (e.g., by computationally projecting the control object tip onto the screen), and visualizes the cursor at the computed location, optionally in a way that discriminates, based on output from the touch-detection module 945, between the engaged and disengaged mode (e.g., by using different colors). The cursor module 947 may also modify the cursor appearance based on the control object distance from the virtual control construct; for instance, the cursor may take the form of a circle having a radius proportional to the distance between the control object tip and the virtual control construct. Further, the user-interface control module 946 may include a completion-indicator (sub)module 948, which depicts the degree of completion of a gesture, as determined by the completion tracker 943, with a suitable indicator (e.g., a partially filled circle). Additionally, the user-interface control module 946 may include a scaling (sub)module 949 that determines the scaling ratio between actual control-object movements and on-screen movements (e.g., based on direct user input via a scale-control panel) and causes adjustments to the displayed content based thereon.
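The cursor and completion-indicator behaviors attributed to modules 947 and 948 could be sketched as a small styling function; the colors, scale factor, and dictionary layout below are invented for illustration only:

    def cursor_style(distance_to_construct, engaged, completion):
        # Radius shrinks as the tip approaches the virtual construct,
        # color discriminates engaged from disengaged mode, and
        # completion in [0, 1] drives a partially filled progress ring.
        radius_px = max(4, int(200.0 * abs(distance_to_construct)))
        color = (220, 40, 40) if engaged else (40, 120, 220)
        ring_degrees = int(360 * completion)
        return {"radius": radius_px, "rgb": color, "ring": ring_degrees}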
The functionality of the different modules can, of course, be grouped and organized in many different ways, as a person of skill in the art would readily understand. Further, it need not necessarily be implemented on a single computer, but may be distributed between multiple computers. For example, the image-analysis and gesture-recognition functionality provided by modules 936, 938, 940, 944, 945, and optionally also the user-interface control functionality of module 946, may be implemented by a separate computer in communication with the computer on which the end-user applications 934 controlled via free-space control object motions are executed, and/or integrated with the cameras 900, 902 and light sources 912 into a single motion-capture device (which, typically, utilizes an application-specific integrated circuit (ASIC) or other special-purpose computer for image-processing). In another exemplary embodiment, the camera images are sent from a client terminal over a network to a remote server computer for processing, and the tracked control object positions and orientations are sent back to the client terminal as input into the user interface. Embodiments can be realized using any number and arrangement of computers (broadly understood to include any kind of general-purpose or special-purpose processing device, including, e.g., microcontrollers, ASICs, programmable gate arrays (PGAs), or digital signal processors (DSPs) and associated peripherals) executing the methods described herein, and any implementation of the various functional modules in hardware, software, or a combination thereof.
Computer programs incorporating various features or functionality described herein may be encoded on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and any other non-transitory medium capable of holding data in a computer-readable form. Computer-readable storage media encoded with the program code may be packaged with a compatible device or provided separately from other devices. In addition, program code may be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download and/or provided on-demand as web-services.
The systems and methods described herein may find application in a variety of computer-user-interface contexts, and may replace mouse operation or other traditional means of user input as well as provide new user-input modalities. Free-space control object motions and virtual-touch recognition may be used, for example, to provide input to commercial and industrial legacy applications (such as, e.g., business applications, including Microsoft Outlook™; office software, including Microsoft Office™, Windows™, Excel™, etc.; and graphic design programs, including Microsoft Visio™, etc.), operating systems such as Microsoft Windows™, web applications (e.g., browsers, such as Internet Explorer™), and other applications (such as, e.g., audio, video, and graphics programs), to navigate virtual worlds (e.g., in video games) or computer representations of the real world (e.g., Google Street View™), or to interact with three-dimensional virtual objects (e.g., Google Earth™).
FIGS. 10A-13B illustrate various exemplary control inputs achievable with free-space hand motions and gestures when using systems and methods in accordance herewith. An example of a compound gesture will be illustrated with reference to an embodiment illustrated by FIGS. 10A-10D. These diagrams are merely an example; one of ordinary skill in the art would recognize many other variations, alternatives, and modifications. FIG. 10A illustrates a system 500a comprising wired and/or wirelessly communicatively coupled components of a tower 1002a, a display device 1004a, a keyboard 1006a and optionally a tactile pointing device (e.g., mouse, or track ball) 1008a. In some embodiments, computing machinery of tower 1002a can be integrated into display device 1004a in an “all in one” configuration. A position and motion sensing device (e.g., 1000a-1, 1000a-2 and/or 1000a-3) comprises all or a portion of the non-tactile interface system of FIG. 5A, which provides for receiving non-tactile input based upon detected position(s), shape(s) and/or motion(s) made by a hand 504 and/or any other detectable object serving as a control object. The position and motion sensing device can be embodied as a stand-alone entity or integrated into another device, e.g., a computer, workstation, laptop, notebook, smartphone, tablet, smart watch or other type of wearable intelligent device(s) and/or combinations thereof. The position and motion sensing device can be communicatively coupled with, and/or integrated within, one or more of the other elements of system 500a, and can interoperate cooperatively with component(s) of the system 500a to provide non-tactile interface capabilities, such as illustrated by the non-tactile interface system of FIG. 1A.
The motion sensing device (e.g., 1000a-1, 1000a-2 and/or 1000a-3) is capable of detecting position as well as motion of hands and/or portions of hands and/or other detectable objects (e.g., a pen, a pencil, a stylus, a paintbrush, an eraser, a virtualized tool, and/or a combination thereof), within a region of space 510a from which it is convenient for a user to interact with system 500a. Region 510a can be situated in front of, nearby, and/or surrounding system 500a. In some embodiments, the position and motion sensing device can be integrated directly into display device 1004a as integrated device 1000a-2 and/or keyboard 1006a as integrated device 1000a-3. While FIG. 10A illustrates devices 1000a-1, 1000a-2 and 1000a-3, it will be appreciated that these are alternative embodiments shown in FIG. 10A for clarity's sake. Keyboard 1006a and the position and motion sensing device are representative types of “user input devices.” Other examples of user input devices (not shown in FIG. 10A) can be used in conjunction with computing environment 500a, such as, for example, a touch screen, light pen, mouse, track ball, touch pad, data glove and so forth. Accordingly, FIG. 10A is representative of but one type of system embodiment. It will be readily apparent to one of ordinary skill in the art that many system types and configurations are suitable for use in conjunction with various embodiments.
Tower 1002a and/or the position and motion sensing device and/or other elements of system 500a can implement functionality to provide a virtual control surface 1000a within region 510a with which engagement gestures are sensed and interpreted to facilitate user interactions with system 1002a. Accordingly, objects and/or motions occurring relative to virtual control surface 1000a within region 510a can be afforded differing interpretations than like (and/or similar) objects and/or motions otherwise occurring.
As illustrated in FIG. 10A, control object 504 (which happens to be a pointing finger in this example) is moving toward an “Erase” button being displayed on display 1004a by a user desiring to select the “Erase” button. Now with reference to FIG. 10B, control object 504 has moved and triggered an engagement gesture by means of “virtually contacting”, i.e., intersecting, virtual control surface 1000a. At this point, unfortunately, the user has suffered misgivings about executing an “Erase.” Since the “Erase” button has been engaged, however, mere withdrawal of control object 504 (i.e., a “dis-intersection”) will not undo the erase operation selected. Accordingly, with reference to FIG. 10C, the user makes a wiping motion with a second control object (i.e., the user's other hand in this example), indicating that the user would like to cancel an operation that is underway. Motion by a second control object illustrates a “compound gesture” that includes two or more gestures, sequentially or simultaneously. Compound gestures can be performed using a single control object, or two or more control objects (e.g., one hand, two hands, one stylus and one hand, etc.). In the illustrated case, the point/select and the wipe are two gestures made by two different control objects (two hands) occurring contemporaneously. Now with reference to FIG. 10D, when the second part of the compound gesture is recognized, the Erase button is no longer highlighted, indicating that the button is now “unselected”. The user is free to withdraw the first control object from engagement with the virtual control surface without triggering an “Erase” operation.
FIGS. 11A and 11B illustrate a zooming action performed by two fingers (thumb and index finger) according to various embodiments. These diagrams are merely an example; one of ordinary skill in the art would recognize many other variations, alternatives, and modifications. As illustrated by FIG. 11A, an image 1106 (which happens to be a web page feed) is being displayed by display 1104, by a browser or other application. To zoom in, the user commences a motion including engaging a virtual control construct (not shown) interposed between the user and display 1104 at an engagement target approximately over the right-most column being displayed. In FIG. 11B, the finger tips 504a, 504b of the user are moved away from each other. This motion is recognized by device 700 from differences in images captured of the control object portions 504a, 504b and determined to be an engagement gesture including a spreading motion of the thumb and index finger-tip in front of the screen using the techniques described hereinabove. The result of interpreting the engagement gesture is passed to an application (and/or to the OS) owning the display 1104. The application owning display 1104 responds by zooming in the image of display 1104.
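A sketch of how the recognized spreading motion might be converted into a zoom command for the owning application; real systems would likely add clamping and hysteresis, and all names here are assumptions:

    import numpy as np

    def zoom_factor(thumb_tip, index_tip, initial_spread):
        # Ratio of the current finger-tip spread to the spread measured
        # when the engagement gesture began; >1 zooms in, <1 zooms out.
        spread = float(np.linalg.norm(index_tip - thumb_tip))
        return spread / max(initial_spread, 1e-6)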
FIGS. 12A and 12B show how a swiping gesture by a finger in engaged mode may serve to scroll through screen content according to various embodiments. These diagrams are merely an example; one of ordinary skill in the art would recognize many other variations, alternatives, and modifications. As illustrated by FIG. 12A, an image 1206 (which happens to be of dogs in this example) is being displayed by display 1204. When the user commences a motion relative to and engaged with a virtual control construct (not shown) interposed between the user and display 1204 (e.g., at an engagement target approximately over the left-most dog), the user's gesture may be interpreted as a control input for the application displaying the images. For example, in FIG. 12B, the user has swiped a finger-tip 504a from left to right. This motion is recognized by the device from differences in images captured of the control object portion 504a and determined to be an engagement gesture including a swiping motion from left to right that pierces the virtual control construct using the techniques described hereinabove. The result of interpreting the engagement gesture is passed to the image application, which responds by scrolling the image on the display 1204. On the other hand, the same gesture performed without engaging the virtual control construct may be passed to the operating system and, for example, used to switch the display 1204 between multiple desktops or trigger some other higher-level function. This is just one example of how engagement gestures, i.e., gestures performed relative to a virtual control construct (whether in the engaged or the disengaged mode, or changing between the modes), can be used to provide different types of control input.
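The differing interpretation of the same swipe, depending on whether the virtual control construct was pierced, amounts to a small dispatch rule; the app and os_shell objects below are hypothetical stand-ins for the application and operating-system interfaces:

    def route_swipe(swipe_dx, engaged, app, os_shell):
        # Engaged swipes scroll the application's content; the identical
        # motion without engagement is handed to the operating system,
        # e.g., to switch between desktops.
        if engaged:
            app.scroll(swipe_dx)
        else:
            os_shell.switch_desktop("right" if swipe_dx > 0 else "left")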
FIGS. 13A and 13B show how the motion of a control object in free space in conjunction with a virtual plane (or a slice of a certain thickness) can provide writing with a virtual pen onto a virtual paper defined in space according to various embodiments. These diagrams are merely an example; one of ordinary skill in the art would recognize many other variations, alternatives, and modifications. As shown in FIG. 13A, a user moves a tool 504b (which happens to be a stylus) in free space in front of a writing area being displayed on the screen of display 1304 so as to pierce a virtual control construct (not shown) (which happens to be a plane) interposed between the user and display 1304. This motion is recognized by device 1300 from differences in images captured of the control object portion 504b and determined to be an engagement gesture including placing a virtual pen onto a virtual paper of space, and is reflected by the contents of display 1304. Continuing motion of the stylus 504b in space by the user after engaging the virtual control plane is interpreted as writing with the stylus 504b on the virtual paper of space and is reflected by the contents of display 1304. As shown in FIG. 13B, when the user dis-engages with the virtual control construct, the virtual pen is lifted from the virtual paper, completing the letter “D” in script matching the handwriting of the user in free space. Accordingly, embodiments can enable, e.g., signature capture, free-hand drawings, etc.
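Sketching the virtual-pen behavior: points are appended to the current stroke only while the stylus tip remains on the far side of the virtual plane, and dis-engagement lifts the pen (all names assumed for illustration):

    strokes = []          # completed and in-progress strokes
    current = None        # stroke being drawn, or None if the pen is lifted

    def pen_update(tip_xy, pierced):
        global current
        if pierced:
            if current is None:
                current = [tip_xy]       # pen touches the virtual paper
                strokes.append(current)
            else:
                current.append(tip_xy)   # continuing motion draws the stroke
        else:
            current = None               # pen lifted; stroke complete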
The above-described 3D user-interaction technique enables the user to intuitively control and manipulate the electronic device and virtual objects by simply performing body gestures. Because the gesture-recognition system facilitates rendering of reconstructed 3D images of the gestures with high detection sensitivity, dynamic user interactions for display control are achieved in real time without excessive computational complexity. For example, the user can dynamically control the relationship between his actual movement and the corresponding action displayed on the screen. In addition, the device may display an on-screen indicator to reflect a degree of completion of the user's gesture in real time. Accordingly, embodiments can enable the user to dynamically interact with virtual objects displayed on the screen and can advantageously enhance the realism of the virtual environment.
The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without undue experimentation. Accordingly, the described embodiments are to be considered in all respects as only illustrative and not restrictive.

Claims (17)

What is claimed is:
1. A method of controlling a machine, comprising:
sensing a variation of position of a control object using an imaging system;
determining, from the variation, one or more primitives describing a characteristic of a control object moving in space, and determining whether motion of the control object corresponds to an engagement gesture;
comparing the one or more primitives to one or more engagement gesture templates in a library of gesture templates;
selecting, based on a result of the comparing, one or more engagement gesture templates corresponding to the one or more primitives;
providing, responsive to the variation, at least one engagement gesture template of the selected one or more engagement gesture templates as an indication of a command to issue to a machine under control;
computationally determining a degree of completion of at least one gesture; and
modifying contents of a display in accordance with the engagement gesture.
2. The method according to claim 1, further comprising:
computationally determining a degree of completion of at least one gesture; and
wherein modifying contents of a display in accordance with the engagement gesture also comprises modifying contents of the display in accordance with the determined degree of completion, such that the display gradually fills in an indicator with color or shading indicating how close motion of the control object is to completing the gesture.
3. The method according to claim 2, further comprising:
identifying a scale associated with the control object, the scale being indicative of an actual distance traversed by the control object; and
computationally determining a ratio between the scale and a displayed movement corresponding to an action to be displayed on a presentation device;
displaying the action on the presentation device based on the ratio; and
adjusting the ratio based on an external parameter.
4. The method according to claim 3, wherein
the external parameter is a ratio of a pixel distance in captured images to a size, in pixels, of a display screen of the presentation device.
5. The method according to claim 1, wherein the comparing comprises:
disassembling at least a portion of a trajectory of the control object into a set of frequency components; and
searching for a set of frequency components among the one or more gesture templates.
6. The method according to claim 1, wherein the comparing comprises:
disassembling at least a portion of a trajectory of the control object into a set of time dependent frequency components; and
searching for a set of time dependent frequency components among the one or more gesture templates stored in the library.
7. The method according to claim 6, wherein the disassembling comprises applying wavelet analysis to the portion of the trajectory as a signal over time to determine the set of time dependent frequency components.
8. The method according to claim 1, wherein the selecting comprises:
determining a similarity between the one or more primitives and the one or more gesture templates by applying at least one similarity determiner; and
providing the similarity as an indication of a quality of correspondence between the one or more primitives and the one or more gesture templates.
9. The method according to claim 1, wherein the selecting comprises:
performing at least one of scaling and shifting to at least one of the one or more primitives and the one or more gesture templates.
10. The method according to claim 1, wherein the selecting comprises:
disassembling at least a portion of a trajectory of the control object into a set of frequency components;
filtering the set of frequency components to remove motions associated with jitter; and
searching for a gesture template, of the one or more gesture templates, that matches a frequency component of the filtered set of frequency components.
11. The method according to claim 10, wherein the filtering comprises applying a Frenet-Serret filter.
12. The method according to claim 1, wherein determining comprises determining a position or motion of the control object relative to a virtual control construct.
13. The method according to claim 1, further comprising:
identifying two simultaneous gestures based on variations of positions of the control object;
computationally determining a dominant gesture of the two simultaneous gestures; and
presenting an action on a presentation device based on the dominant gesture.
14. The method according to claim 1, wherein the providing comprises:
detecting a conflict between a gesture template corresponding to a user-defined gesture and a gesture template corresponding to a predetermined gesture; and
applying a resolution determiner to resolve the conflict by selecting one of: the user-defined gesture and the conflicting predetermined gesture.
15. The method according to claim 14, wherein the applying comprises selecting the user-defined gesture when the conflict is between the predetermined gesture and the user-defined gesture.
16. A system comprising a memory and one or more processors, the memory being loaded with computer instructions that, when executed on the one or more processors, cause the one or more processors to implement operations comprising:
sensing a variation of position of a control object using an imaging system;
determining, from the variation, one or more primitives describing a characteristic of a control object moving in space, and whether motion of the control object corresponds to an engagement gesture;
comparing the one or more primitives to one or more engagement gesture templates in a library of gesture templates;
selecting, based on a result of the comparing, one or more engagement gesture templates corresponding to the one or more primitives;
providing, responsive to the variation, at least one engagement gesture template of the selected one or more engagement gesture templates as an indication of a command to issue to a machine under control;
computationally determining a degree of completion of at least one gesture; and
modifying contents of a display in accordance with the engagement gesture.
17. A non-transitory computer readable recording medium having computer instructions recorded thereon, the computer instructions, when executed by one or more processors, causing the one or more processors to implement operations comprising:
sensing a variation of position of a control object using an imaging system;
determining, from the variation, one or more primitives describing a characteristic of a control object moving in space, and whether motion of the control object corresponds to an engagement gesture;
comparing the one or more primitives to one or more engagement gesture templates in a library of gesture templates;
selecting, based on a result of the comparing, one or more engagement gesture templates corresponding to the one or more primitives;
providing, responsive to the variation, at least one engagement gesture template of the selected one or more engagement gesture templates as an indication of a command to issue to a machine under control;
computationally determining a degree of completion of at least one gesture; and
modifying contents of a display in accordance with the engagement gesture.
US18/219,517 | 2013-01-15 | 2023-07-07 | Dynamic, free-space user interactions for machine control | Active | US12204695B2 (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
US18/219,517 US12204695B2 (en) | 2013-01-15 | 2023-07-07 | Dynamic, free-space user interactions for machine control
US18/988,746 US20250130648A1 (en) | 2013-01-15 | 2024-12-19 | Dynamic, free-space user interactions for machine control

Applications Claiming Priority (19)

Application Number | Priority Date | Filing Date | Title
US201361752725P | 2013-01-15 | 2013-01-15
US201361752731P | 2013-01-15 | 2013-01-15
US201361752733P | 2013-01-15 | 2013-01-15
US201361791204P | 2013-03-15 | 2013-03-15
US201361808984P | 2013-04-05 | 2013-04-05
US201361808959P | 2013-04-05 | 2013-04-05
US201361816487P | 2013-04-26 | 2013-04-26
US201361824691P | 2013-05-17 | 2013-05-17
US201361825480P | 2013-05-20 | 2013-05-20
US201361825515P | 2013-05-20 | 2013-05-20
US201361872538P | 2013-08-30 | 2013-08-30
US201361873351P | 2013-09-03 | 2013-09-03
US201361877641P | 2013-09-13 | 2013-09-13
US14/154,730 US9501152B2 (en) | 2013-01-15 | 2014-01-14 | Free-space user interface and control using virtual constructs
US14/155,722 US9459697B2 (en) | 2013-01-15 | 2014-01-15 | Dynamic, free-space user interactions for machine control
US15/279,363 US10139918B2 (en) | 2013-01-15 | 2016-09-28 | Dynamic, free-space user interactions for machine control
US16/195,755 US11243612B2 (en) | 2013-01-15 | 2018-11-19 | Dynamic, free-space user interactions for machine control
US17/666,534 US11740705B2 (en) | 2013-01-15 | 2022-02-07 | Method and system for controlling a machine according to a characteristic of a control object
US18/219,517 US12204695B2 (en) | 2013-01-15 | 2023-07-07 | Dynamic, free-space user interactions for machine control

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US17/666,534 | Continuation | US11740705B2 (en) | 2013-01-15 | 2022-02-07 | Method and system for controlling a machine according to a characteristic of a control object

Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
US18/988,746 | Continuation | US20250130648A1 (en) | 2013-01-15 | 2024-12-19 | Dynamic, free-space user interactions for machine control

Publications (2)

Publication Number | Publication Date
US20240061511A1 (en) | 2024-02-22
US12204695B2 (en) | 2025-01-21

Family

ID=51166264

Family Applications (6)

Application Number | Title | Priority Date | Filing Date
US14/155,722 | Active (2034-09-21) | US9459697B2 (en) | 2013-01-15 | 2014-01-15 | Dynamic, free-space user interactions for machine control
US15/279,363 | Active (2034-01-27) | US10139918B2 (en) | 2013-01-15 | 2016-09-28 | Dynamic, free-space user interactions for machine control
US16/195,755 | Active (2034-01-31) | US11243612B2 (en) | 2013-01-15 | 2018-11-19 | Dynamic, free-space user interactions for machine control
US17/666,534 | Active | US11740705B2 (en) | 2013-01-15 | 2022-02-07 | Method and system for controlling a machine according to a characteristic of a control object
US18/219,517 | Active | US12204695B2 (en) | 2013-01-15 | 2023-07-07 | Dynamic, free-space user interactions for machine control
US18/988,746 | Pending | US20250130648A1 (en) | 2013-01-15 | 2024-12-19 | Dynamic, free-space user interactions for machine control

Country Status (1)

Country | Link
US (6) | US9459697B2 (en)

Citations (485)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US2665041A (en)1952-01-091954-01-05Daniel J MaffucciCombination form for washing woolen socks
US3064704A (en)1961-01-261962-11-20Hutchinson Cie EtsPneumatic assembly for a vehicle wheel
US4175862A (en)1975-08-271979-11-27Solid Photography Inc.Arrangement for sensing the geometric characteristics of an object
US4876455A (en)1988-02-251989-10-24Westinghouse Electric Corp.Fiber optic solder joint inspection system
US4879659A (en)1987-11-241989-11-07Bowlin William PLog processing systems
US4893223A (en)1989-01-101990-01-09Northern Telecom LimitedIllumination devices for inspection systems
JPH02236407A (en)1989-03-101990-09-19Agency Of Ind Science & TechnolMethod and device for measuring shape of object
US5038258A (en)1989-03-021991-08-06Carl-Zeiss-StiftungIlluminating arrangement for illuminating an object with incident light
US5134661A (en)1991-03-041992-07-28Reinsch Roger AMethod of capture and analysis of digitized image data
DE4201934A1 (en)1992-01-241993-07-29Siemens Ag GESTIC COMPUTER
US5282067A (en)1991-10-071994-01-25California Institute Of TechnologySelf-amplified optical pattern recognition system
WO1994026057A1 (en)1993-04-291994-11-10Scientific Generics LimitedBackground separation for still and moving images
US5434617A (en)1993-01-291995-07-18Bell Communications Research, Inc.Automatic tracking camera control system
US5454043A (en)1993-07-301995-09-26Mitsubishi Electric Research Laboratories, Inc.Dynamic and static hand gesture recognition through low-level image analysis
JPH08261721A (en)1995-03-221996-10-11Teijin LtdDeterioration detecting method for image processing illuminating means
US5574511A (en)1995-10-181996-11-12Polaroid CorporationBackground replacement for an image
US5581276A (en)1992-09-081996-12-03Kabushiki Kaisha Toshiba3D human interface apparatus using motion recognition based on dynamic image processing
US5594469A (en)1995-02-211997-01-14Mitsubishi Electric Information Technology Center America Inc.Hand gesture machine control system
US5659475A (en)1994-03-171997-08-19Brown; Daniel M.Electronic air traffic control system for use in airport towers
JPH09259278A (en)1996-03-251997-10-03Matsushita Electric Ind Co Ltd Image processing device
US5691737A (en)1993-09-211997-11-25Sony CorporationSystem for explaining an exhibit using spectacle-type displays
US5742263A (en)1995-12-181998-04-21Telxon CorporationHead tracking system for a head mounted display system
US5900863A (en)1995-03-161999-05-04Kabushiki Kaisha ToshibaMethod and apparatus for controlling computer without touching input device
US5940538A (en)1995-08-041999-08-17Spiegel; EhudApparatus and methods for object border tracking
US6002808A (en)1996-07-261999-12-14Mitsubishi Electric Information Technology Center America, Inc.Hand gesture control system
JP2000023038A (en)1998-06-302000-01-21Toshiba Corp Image extraction device
US6031661A (en)1997-01-232000-02-29Yokogawa Electric CorporationConfocal microscopic equipment
US6031161A (en)1998-02-042000-02-29Dekalb Genetics CorporationInbred corn plant GM9215 and seeds thereof
EP0999542A1 (en)1998-11-022000-05-10Ncr International Inc.Methods of and apparatus for hands-free operation of a voice recognition system
US6072494A (en)1997-10-152000-06-06Electric Planet, Inc.Method and apparatus for real-time gesture recognition
US6075895A (en)1997-06-202000-06-13HoloplexMethods and apparatus for gesture recognition based on templates
US6147678A (en)1998-12-092000-11-14Lucent Technologies Inc.Video hand image-three-dimensional computer interface with multiple degrees of freedom
US6154558A (en)1998-04-222000-11-28Hsieh; Kuan-HongIntention identification method
US6181343B1 (en)1997-12-232001-01-30Philips Electronics North America Corp.System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6184326B1 (en)1992-03-202001-02-06Fina Technology, Inc.Syndiotactic polypropylene
US6184926B1 (en)1996-11-262001-02-06Ncr CorporationSystem and method for detecting a human face in uncontrolled environments
US6195104B1 (en)1997-12-232001-02-27Philips Electronics North America Corp.System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6204852B1 (en)1998-12-092001-03-20Lucent Technologies Inc.Video hand image three-dimensional computer interface
US6252598B1 (en)1997-07-032001-06-26Lucent Technologies Inc.Video hand image computer interface
US6263091B1 (en)1997-08-222001-07-17International Business Machines CorporationSystem and method for identifying foreground and background portions of digitized images
US20010044858A1 (en)1999-12-212001-11-22Junichi RekimotoInformation input/output system and information input/output method
US20010052985A1 (en)2000-06-122001-12-20Shuji OnoImage capturing apparatus and distance measuring method
US20020008211A1 (en)2000-02-102002-01-24Peet KaskFluorescence intensity multiple distributions analysis: concurrent determination of diffusion times and molecular brightness
US20020008139A1 (en)2000-04-212002-01-24Albertelli Lawrence E.Wide-field extended-depth doubly telecentric catadioptric optical system for digital imaging
US6346933B1 (en)1999-09-212002-02-12Seiko Epson CorporationInteractive display presentation system
US20020021287A1 (en)2000-02-112002-02-21Canesta, Inc.Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device
US20020041327A1 (en)2000-07-242002-04-11Evan HildrethVideo-based image control system
JP2002133400A (en)2000-10-242002-05-10Oki Electric Ind Co LtdObject extraction image processor
US20020080094A1 (en)2000-12-222002-06-27Frank BioccaTeleportal face-to-face system
US6417970B1 (en)2000-06-082002-07-09Interactive Imaging SystemsTwo stage optical system for head mounted display
US20020105484A1 (en)2000-09-252002-08-08Nassir NavabSystem and method for calibrating a monocular optical see-through head-mounted display system for augmented reality
US6463402B1 (en)2000-03-062002-10-08Ralph W. BennettInfeed log scanning for lumber optimization
US6493041B1 (en)1998-06-302002-12-10Sun Microsystems, Inc.Method and apparatus for the detection of motion in video
US6492986B1 (en)1997-06-022002-12-10The Trustees Of The University Of PennsylvaniaMethod for human face shape and motion estimation based on integrating optical flow and deformable models
US6498628B2 (en)1998-10-132002-12-24Sony CorporationMotion sensing interface
US20030053659A1 (en)2001-06-292003-03-20Honeywell International Inc.Moving object assessment system and method
US20030053658A1 (en)2001-06-292003-03-20Honeywell International Inc.Surveillance system and methods regarding same
US20030081141A1 (en)2001-09-172003-05-01Douglas MazzapicaBrightness adjustment method
US6578203B1 (en)1999-03-082003-06-10Tazwell L. Anderson, Jr.Audio/video signal distribution system for head mounted displays
US20030123703A1 (en)2001-06-292003-07-03Honeywell International Inc.Method for monitoring a moving object and system regarding same
US6603867B1 (en)1998-09-082003-08-05Fuji Xerox Co., Ltd.Three-dimensional object identifying system
US20030152289A1 (en)2002-02-132003-08-14Eastman Kodak CompanyMethod and system for determining image orientation
JP2003256814A (en)2002-02-272003-09-12Olympus Optical Co LtdSubstrate checking device
US6629065B1 (en)1998-09-302003-09-30Wisconsin Alumni Research FoundationMethods and apparata for rapid computer-aided design of objects in virtual reality and other environments
US20030202697A1 (en)2002-04-252003-10-30Simard Patrice Y.Segmented layered image system
US6661918B1 (en)1998-12-042003-12-09Interval Research CorporationBackground estimation and segmentation based on range and color
US6674877B1 (en)2000-02-032004-01-06Microsoft CorporationSystem and method for visually tracking occluded objects in real time
US6702494B2 (en)2002-03-272004-03-09Geka Brush GmbhCosmetic unit
US6734911B1 (en)1999-09-302004-05-11Koninklijke Philips Electronics N.V.Tracking camera using a lens that generates both wide-angle and narrow-angle views
US6738424B1 (en)1999-12-272004-05-18Objectvideo, Inc.Scene model generation from video for use in video processing
US20040103111A1 (en)2002-11-252004-05-27Eastman Kodak CompanyMethod and computer program product for determining an area of importance in an image using eye monitoring information
US20040125984A1 (en)2002-12-192004-07-01Wataru ItoObject tracking method and object tracking apparatus
US20040125228A1 (en)2001-07-252004-07-01Robert DoughertyApparatus and method for determining the range of remote objects
US20040145809A1 (en)2001-03-202004-07-29Karl-Heinz BrennerElement for the combined symmetrization and homogenization of a bundle of beams
US6771294B1 (en)1999-12-292004-08-03Petri PulliUser interface
US20040155877A1 (en)2003-02-122004-08-12Canon Europa N.V.Image processing apparatus
JP2004246252A (en)2003-02-172004-09-02Takenaka Komuten Co LtdApparatus and method for collecting image information
US6798628B1 (en)2000-11-172004-09-28Pass & Seymour, Inc.Arc fault circuit detector having two arc fault detection levels
US6804654B2 (en)2002-02-112004-10-12Telemanager Technologies, Inc.System and method for providing prescription services using voice recognition
US6804656B1 (en)1999-06-232004-10-12Visicu, Inc.System and method for providing continuous, expert network critical care services from a remote location(s)
US20040212725A1 (en)2003-03-192004-10-28Ramesh RaskarStylized rendering using a multi-flash camera
US6814656B2 (en)2001-03-202004-11-09Luis J. RodriguezSurface treatment disks for rotary tools
US6819796B2 (en)2000-01-062004-11-16Sharp Kabushiki KaishaMethod of and apparatus for segmenting a pixellated image
EP1477924A2 (en)2003-03-312004-11-17HONDA MOTOR CO., Ltd.Gesture recognition apparatus, method and program
WO2004114220A1 (en)2003-06-172004-12-29Brown UniversityMethod and apparatus for model-based detection of structure in projection data
US20050007673A1 (en)2001-05-232005-01-13Chaoulov Vesselin I.Compact microlenslet arrays imager
DE10326035A1 (en)2003-06-102005-01-13Hema Elektronik-Fertigungs- Und Vertriebs Gmbh Method for adaptive error detection on a structured surface
US20050068518A1 (en)2003-08-292005-03-31Baney Douglas M.Position determination that is responsive to a retro-reflective object
US20050094019A1 (en)2003-10-312005-05-05Grosvenor David A.Camera control
US6901170B1 (en)2000-09-052005-05-31Fuji Xerox Co., Ltd.Image processing device and recording medium
US20050131607A1 (en)1995-06-072005-06-16Automotive Technologies International Inc.Method and arrangement for obtaining information about vehicle occupants
US6919880B2 (en)2001-06-012005-07-19Smart Technologies Inc.Calibrating camera offsets to facilitate object position determination using triangulation
US20050156888A1 (en)2004-01-162005-07-21Tong XiePosition determination and motion tracking
US20050168578A1 (en)2004-02-042005-08-04William GobushOne camera stereo system
US6950534B2 (en)1998-08-102005-09-27Cybernet Systems CorporationGesture-controlled interfaces for self-service machines and other applications
US20050238201A1 (en)2004-04-152005-10-27Atid ShamaieTracking bimanual movements
US20050236558A1 (en)2004-04-222005-10-27Nobuo NabeshimaDisplacement detection apparatus
JP2006019526A (en)2004-07-012006-01-19Ibiden Co LtdOptical element, package substrate, and device for optical communication
US20060017807A1 (en)2004-07-262006-01-26Silicon Optix, Inc.Panoramic vision system and method
US6993157B1 (en)1999-05-182006-01-31Sanyo Electric Co., Ltd.Dynamic image processing method and device and medium
US20060028656A1 (en)2003-12-182006-02-09Shalini VenkateshMethod and apparatus for determining surface displacement based on an image of a retroreflector attached to the surface
US20060029296A1 (en)2004-02-152006-02-09King Martin TData capture from rendered documents using handheld device
US20060034545A1 (en)2001-03-082006-02-16Universite Joseph FourierQuantitative analysis, visualization and movement correction in dynamic processes
WO2006020846A2 (en)2004-08-112006-02-23THE GOVERNMENT OF THE UNITED STATES OF AMERICA as represented by THE SECRETARY OF THE NAVY Naval Research LaboratorySimulated locomotion method and apparatus
US20060050979A1 (en)2004-02-182006-03-09Isao KawaharaMethod and device of image correction
US20060072105A1 (en)2003-05-192006-04-06Micro-Epsilon Messtechnik Gmbh & Co. KgMethod and apparatus for optically controlling the quality of objects having a circular edge
GB2419433A (en)2004-10-202006-04-26Glasgow School Of ArtAutomated Gesture Recognition
US20060098899A1 (en)2004-04-012006-05-11King Martin THandheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
US20060204040A1 (en)2005-03-072006-09-14Freeman William TOccluding contour detection and storage for digital photography
US20060210112A1 (en)1998-08-102006-09-21Cohen Charles JBehavior recognition system
JP2006259829A (en)2005-03-152006-09-28Omron CorpImage processing system, image processor and processing method, recording medium, and program
US20060262421A1 (en)2005-05-192006-11-23Konica Minolta Photo Imaging, Inc.Optical unit and image capturing apparatus including the same
US7152024B2 (en)2000-08-302006-12-19Microsoft CorporationFacial image processing methods and systems
US20060290950A1 (en)2005-06-232006-12-28Microsoft CorporationImage superresolution through edge extraction and contrast enhancement
US20070014466A1 (en)2005-07-082007-01-18Leo BaldwinAchieving convergent light rays emitted by planar array of light sources
US20070042346A1 (en)2004-11-242007-02-22Battelle Memorial InstituteMethod and apparatus for detection of rare cells
US20070086621A1 (en)2004-10-132007-04-19Manoj AggarwalFlexible layer tracking with weak online appearance model
US7213707B2 (en)2001-12-112007-05-08Walgreen Co.Product shipping and display carton
US20070130547A1 (en)2005-12-012007-06-07Navisense, LlcMethod and system for touchless user interface control
CN1984236A (en)2005-12-142007-06-20浙江工业大学Method for collecting characteristics in telecommunication flow information video detection
US7244233B2 (en)2003-07-292007-07-17Ntd Laboratories, Inc.System and method for utilizing shape analysis to assess fetal abnormality
US7257237B1 (en)2003-03-072007-08-14Sandia CorporationReal time markerless motion tracking using linked kinematic chains
US7259873B2 (en)2004-03-252007-08-21Sikora, AgMethod for measuring the dimension of a non-circular cross-section of an elongated article in particular of a flat cable or a sector cable
US20070206719A1 (en)2006-03-022007-09-06General Electric CompanySystems and methods for improving a resolution of an image
US20070211023A1 (en)2006-03-132007-09-13Navisense. LlcVirtual user interface method and system thereof
EP1837665A2 (en)2006-03-202007-09-26Tektronix, Inc.Waveform compression and display
DE102007015495A1 (en)2006-03-312007-10-04Denso Corp., KariyaControl object e.g. driver`s finger, detection device for e.g. vehicle navigation system, has illumination section to illuminate one surface of object, and controller to control illumination of illumination and image recording sections
US20070238956A1 (en)2005-12-222007-10-11Gabriel HarasImaging device and method for operating an imaging device
WO2007137093A2 (en)2006-05-162007-11-29MadentecSystems and methods for a hands free mouse
US7308112B2 (en)2004-05-142007-12-11Honda Motor Co., Ltd.Sign based human-machine interaction
US20080013826A1 (en)2006-07-132008-01-17Northrop Grumman CorporationGesture recognition interface system
US20080019576A1 (en)2005-09-162008-01-24Blake SenftnerPersonalizing a Video
US20080031492A1 (en)2006-07-102008-02-07Fondazione Bruno KesslerMethod and apparatus for tracking a number of objects or object parts in image sequences
US20080030429A1 (en)2006-08-072008-02-07International Business Machines CorporationSystem and method of enhanced virtual reality
US7340077B2 (en)2002-02-152008-03-04Canesta, Inc.Gesture recognition system using depth perceptive sensors
US20080056752A1 (en)2006-05-222008-03-06Denton Gary AMultipath Toner Patch Sensor for Use in an Image Forming Device
US20080064954A1 (en)2006-08-242008-03-13Baylor College Of MedicineMethod of measuring propulsion in lymphatic structures
US20080106637A1 (en)2006-11-072008-05-08Fujifilm CorporationPhotographing apparatus and photographing method
US20080106746A1 (en)2005-10-112008-05-08Alexander ShpuntDepth-varying light fields for three dimensional sensing
US20080110994A1 (en)2000-11-242008-05-15Knowles C HMethod of illuminating objects during digital image capture operations by mixing visible and invisible spectral illumination energy at point of sale (POS) environments
US20080111710A1 (en)2006-11-092008-05-15Marc BoillotMethod and Device to Control Touchless Recognition
US20080118091A1 (en)2006-11-162008-05-22Motorola, Inc.Alerting system for a communication device
US20080126937A1 (en)2004-10-052008-05-29Sony France S.A.Content-Management Interface
US20080187175A1 (en)2007-02-072008-08-07Samsung Electronics Co., Ltd.Method and apparatus for tracking object, and method and apparatus for calculating object pose information
JP2008227569A (en)2007-03-082008-09-25Seiko Epson Corp Imaging apparatus, electronic device, imaging control method, and imaging control program
US20080244468A1 (en)2006-07-132008-10-02Nishihara H KeithGesture Recognition Interface System with Vertical Display
US20080246759A1 (en)2005-02-232008-10-09Craig SummersAutomatic Scene Modeling for the 3D Camera and 3D Video
US20080273764A1 (en)2004-06-292008-11-06Koninklijke Philips Electronics, N.V.Personal Gesture Signature
US20080278589A1 (en)2007-05-112008-11-13Karl Ola ThornMethods for identifying a target subject to automatically focus a digital camera and related systems, and computer program products
TW200844871A (en)2007-01-122008-11-16IbmControlling resource access based on user gesturing in a 3D captured image stream of the user
US20080291160A1 (en)2007-05-092008-11-27Nintendo Co., Ltd.System and method for recognizing multi-axis gestures based on handheld controller accelerometer outputs
US20080304740A1 (en)2007-06-062008-12-11Microsoft CorporationSalient Object Detection
US20080319356A1 (en)2005-09-222008-12-25Cain Charles APulsed cavitational ultrasound therapy
US20090002489A1 (en)2007-06-292009-01-01Fuji Xerox Co., Ltd.Efficient tracking multiple objects through occlusion
US7483049B2 (en)1998-11-202009-01-27Aman James AOptimizations for live event, real-time, 3D object tracking
JP2009031939A (en)2007-07-252009-02-12Advanced Telecommunication Research Institute International Image processing apparatus, image processing method, and image processing program
US20090093307A1 (en)2007-10-082009-04-09Sony Computer Entertainment America Inc.Enhanced game controller
US7519223B2 (en)2004-06-282009-04-14Microsoft CorporationRecognizing gestures and using gestures for interacting with software applications
US20090103780A1 (en)2006-07-132009-04-23Nishihara H KeithHand-Gesture Recognition Method
US20090102840A1 (en)2004-07-152009-04-23You Fu LiSystem and method for 3d measurement and surface reconstruction
US20090116742A1 (en)2007-11-012009-05-07H Keith NishiharaCalibration of a Gesture Recognition Interface System
US7532206B2 (en)2003-03-112009-05-12Smart Technologies UlcSystem and method for differentiating between pointers used to contact touch surface
US20090122146A1 (en)2002-07-272009-05-14Sony Computer Entertainment Inc.Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US7536032B2 (en)2003-10-242009-05-19Reactrix Systems, Inc.Method and system for processing captured image information in an interactive video display system
US20090128564A1 (en)2007-11-152009-05-21Canon Kabushiki KaishaImage processing apparatus and image processing method
US7542586B2 (en)2001-03-132009-06-02Johnson Raymond CTouchless identification system for monitoring hand washing or application of a disinfectant
US20090153655A1 (en)2007-09-252009-06-18Tsukasa IkeGesture recognition apparatus and method thereof
US20090203993A1 (en)2005-04-262009-08-13Novadaq Technologies Inc.Real time imagining during solid organ transplant
US20090203994A1 (en)2005-04-262009-08-13Novadaq Technologies Inc.Method and apparatus for vasculature visualization with applications in neurosurgery and neurology
US20090217211A1 (en)2008-02-272009-08-27Gesturetek, Inc.Enhanced input using recognized gestures
US7598942B2 (en)2005-02-082009-10-06Oblong Industries, Inc.System and method for gesture based control system
US20090257623A1 (en)2008-04-152009-10-15Cyberlink CorporationGenerating effects in a webcam application
US7606417B2 (en)2004-08-162009-10-20Fotonation Vision LimitedForeground/background segmentation in digital images with differential exposure calculations
CN201332447Y (en)2008-10-222009-10-21康佳集团股份有限公司Television for controlling or operating game through gesture change
US20090309710A1 (en)2005-04-282009-12-17Aisin Seiki Kabushiki KaishaVehicle Vicinity Monitoring System
US20100001998A1 (en)2004-01-302010-01-07Electronic Scripting Products, Inc.Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features
US7646372B2 (en)2003-09-152010-01-12Sony Computer Entertainment Inc.Methods and systems for enabling direction detection when interfacing with a computer program
US20100013832A1 (en)2008-07-162010-01-21Jing XiaoModel-Based Object Image Processing
WO2010007662A1 (en)2008-07-152010-01-21イチカワ株式会社Heat-resistant cushion material for forming press
US20100013662A1 (en) 2008-07-17 2010-01-21 Michael Stude: Product locating system
US20100020078A1 (en) 2007-01-21 2010-01-28 Prime Sense Ltd: Depth mapping using multi-beam illumination
US20100023015A1 (en) 2008-07-23 2010-01-28 Otismed Corporation: System and method for manufacturing arthroplasty jigs having improved mating accuracy
US7656372B2 (en) 2004-02-25 2010-02-02 Nec Corporation: Method for driving liquid crystal display device having a display pixel region and a dummy pixel region
US20100026963A1 (en) 2008-08-01 2010-02-04 Andreas Faulstich: Optical projection grid, scanning camera comprising an optical projection grid and method for generating an optical projection grid
US20100027845A1 (en) 2008-07-31 2010-02-04 Samsung Electronics Co., Ltd.: System and method for motion detection based on object trajectory
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation: Architecture for controlling a computer using hand gestures
US20100046842A1 (en) 2008-08-19 2010-02-25 Conwell William Y: Methods and Systems for Content Processing
US20100053209A1 (en) 2008-08-29 2010-03-04 Siemens Medical Solutions Usa, Inc.: System for Processing Medical Image data to Provide Vascular Function Information
US20100053612A1 (en) 2008-09-03 2010-03-04 National Central University: Method and apparatus for scanning hyper-spectral image
US20100058252A1 (en) 2008-08-28 2010-03-04 Acer Incorporated: Gesture guide system and a method for controlling a computer system by a gesture
US20100053164A1 (en) 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd: Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
US20100066737A1 (en) 2008-09-16 2010-03-18 Yuyu Liu: Dynamic-state estimating apparatus, dynamic-state estimating method, and program
US20100066676A1 (en) 2006-02-08 2010-03-18 Oblong Industries, Inc.: Gestural Control of Autonomous and Semi-Autonomous Systems
US20100066975A1 (en) 2006-11-29 2010-03-18 Bengt Rehnstrom: Eye tracking illumination
WO2010032268A2 (en) 2008-09-19 2010-03-25 Avinash Saxena: System and method for controlling graphical objects
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc: Camera-based touch system
US20100091110A1 (en) 2008-10-10 2010-04-15 Gesturetek, Inc.: Single camera tracker
US20100095206A1 (en) 2008-10-13 2010-04-15 Lg Electronics Inc.: Method for providing a user interface using three-dimensional gestures and an apparatus using the same
US20100118123A1 (en) 2007-04-02 2010-05-13 Prime Sense Ltd: Depth mapping using projected patterns
US20100121189A1 (en) 2008-11-12 2010-05-13 Sonosite, Inc.: Systems and methods for image presentation for medical examination and interventional procedures
US20100125815A1 (en) 2008-11-19 2010-05-20 Ming-Jen Wang: Gesture-based control method for interactive screen control
US20100127995A1 (en) 2008-11-26 2010-05-27 Panasonic Corporation: System and method for differentiating between intended and unintended user input on a touchpad
CN101729808A (en) 2008-10-14 2010-06-09 TCL Corporation: Remote control method for television and system for remotely controlling television by same
US20100141762A1 (en) 2006-11-20 2010-06-10 Jon Siann: Wireless Network Camera Systems
US20100158372A1 (en) 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute: Apparatus and method for separating foreground and background
US20100162165A1 (en) 2008-12-22 2010-06-24 Apple Inc.: User Interface Tools
WO2010076622A1 (en) 2008-12-30 2010-07-08 Nokia Corporation: Method, apparatus and computer program product for providing hand segmentation for gesture analysis
US20100177929A1 (en) 2009-01-12 2010-07-15 Kurtz Andrew F: Enhanced safety during laser projection
US20100194863A1 (en) 2009-02-02 2010-08-05 Ydreams - Informatica, S.A.: Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images
US20100199232A1 (en) 2009-02-03 2010-08-05 Massachusetts Institute Of Technology: Wearable Gestural Interface
US20100199221A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation: Navigation of a virtual plane using depth
WO2010088035A2 (en) 2009-01-30 2010-08-05 Microsoft Corporation: Gesture recognizer system architecture
US20100201880A1 (en) 2007-04-13 2010-08-12 Pioneer Corporation: Shot size identifying apparatus and method, electronic apparatus, and computer program
US20100208942A1 (en) 2009-02-19 2010-08-19 Sony Corporation: Image processing device and method
US20100222102A1 (en) 2009-02-05 2010-09-02 Rodriguez Tony F: Second Screens and Widgets
US20100219934A1 (en) 2009-02-27 2010-09-02 Seiko Epson Corporation: System of controlling device in response to gesture
US20100248836A1 (en) 2009-03-30 2010-09-30 Nintendo Co., Ltd.: Computer readable storage medium having game program stored thereon and game apparatus
US20100264833A1 (en) 2007-11-08 2010-10-21 Tony Petrus Van Endert: Continuous control of led light beam position and focus based on selection and intensity control
US20100275159A1 (en) 2009-04-23 2010-10-28 Takashi Matsubara: Input device
US20100277411A1 (en) 2009-05-01 2010-11-04 Microsoft Corporation: User tracking feedback
US7831932B2 (en) 2002-03-08 2010-11-09 Revelations in Design, Inc.: Electric device control apparatus and methods for making and using same
US7840031B2 (en) 2007-01-12 2010-11-23 International Business Machines Corporation: Tracking a range of body movement based on 3D captured image streams of a user
US20100296698A1 (en) 2009-05-25 2010-11-25 Visionatics Inc.: Motion object detection method using adaptive background model and computer-readable storage medium
US20100303298A1 (en) 2002-07-27 2010-12-02 Sony Computer Entertainment Inc.: Selective sound source listening in conjunction with computer interactive processing
US20100306712A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation: Gesture Coach
US20100302357A1 (en) 2009-05-26 2010-12-02 Che-Hao Hsu: Gesture-based remote control system
WO2010138741A1 (en) 2009-05-27 2010-12-02 Analog Devices, Inc.: Position measurement systems using position sensitive detectors
US20100302015A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation: Systems and methods for immersive interaction with virtual objects
US20100309097A1 (en) 2009-06-04 2010-12-09 Roni Raviv: Head mounted 3d display
US20100321377A1 (en) 2009-06-23 2010-12-23 Disney Enterprises, Inc. (Burbank, Ca): System and method for integrating multiple virtual rendering systems to provide an augmented reality
WO2010148155A2 (en) 2009-06-16 2010-12-23 Microsoft Corporation: Surface computer user interaction
CN101930610A (en) 2009-06-26 2010-12-29 Visionatics Inc.: Moving Object Detection Method Using Adaptive Background Model
JP2011010258A (en) 2009-05-27 2011-01-13 Seiko Epson Corp: Image processing apparatus, image display system, and image extraction device
US20110007072A1 (en) 2009-07-09 2011-01-13 University Of Central Florida Research Foundation, Inc.: Systems and methods for three-dimensionally modeling moving objects
CN101951474A (en) 2010-10-12 2011-01-19 TPV Display Technology (Xiamen) Co., Ltd.: Television technology based on gesture control
US20110025818A1 (en) 2006-11-07 2011-02-03 Jonathan Gallmeier: System and Method for Controlling Presentations and Videoconferences Using Hand Motions
US20110026765A1 (en) 2009-07-31 2011-02-03 Echostar Technologies L.L.C.: Systems and methods for hand gesture control of an electronic device
US20110043806A1 (en) 2008-04-17 2011-02-24 Avishay Guetta: Intrusion warning system
WO2011024193A2 (en) 2009-08-20 2011-03-03 Natarajan Kannan: Electronically variable field of view (fov) infrared illuminator
US20110057875A1 (en) 2009-09-04 2011-03-10 Sony Corporation: Display control apparatus, display control method, and display control program
US20110066984A1 (en) * 2009-09-16 2011-03-17 Google Inc.: Gesture Recognition on Computing Device
WO2011036618A2 (en) 2009-09-22 2011-03-31 Pebblestech Ltd.: Remote control of computer devices
US20110080490A1 (en) 2009-10-07 2011-04-07 Gesturetek, Inc.: Proximity object tracker
US20110080337A1 (en) 2009-10-05 2011-04-07 Hitachi Consumer Electronics Co., Ltd.: Image display device and display control method thereof
US20110080470A1 (en) 2009-10-02 2011-04-07 Kabushiki Kaisha Toshiba: Video reproduction apparatus and video reproduction method
WO2011044680A1 (en) 2009-10-13 2011-04-21 Recon Instruments Inc.: Control systems and methods for head-mounted information systems
WO2011045789A1 (en) 2009-10-13 2011-04-21 Pointgrab Ltd.: Computer vision gesture based control of a device
US20110093820A1 (en) 2009-10-19 2011-04-21 Microsoft Corporation: Gesture personalization and profile roaming
US20110107216A1 (en) 2009-11-03 2011-05-05 Qualcomm Incorporated: Gesture-based user interface
US7940885B2 (en) 2005-11-09 2011-05-10 Dexela Limited: Methods and apparatus for obtaining low-dose imaging
CN102053702A (en) 2010-10-26 2011-05-11 Nanjing University of Aeronautics and Astronautics: Dynamic gesture control system and method
US20110115486A1 (en) 2008-04-18 2011-05-19 Universitat Zurich: Travelling-wave nuclear magnetic resonance method
US20110119640A1 (en) 2009-11-19 2011-05-19 Microsoft Corporation: Distance scalable no touch computing
US20110116684A1 (en) 2007-12-21 2011-05-19 Coffman Thayne R: System and method for visually tracking with occlusions
US7948493B2 (en) 2005-09-30 2011-05-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.: Apparatus, method and computer program for determining information about shape and/or location of an ellipse in a graphical image
JP2011107681A (en) 2009-07-17 2011-06-02 Nikon Corp: Focusing device and camera
CN201859393U (en) 2010-04-13 2011-06-08 Ren Feng: Three-dimensional gesture recognition box
US20110134112A1 (en) 2009-12-08 2011-06-09 Electronics And Telecommunications Research Institute: Mobile terminal having gesture recognition function and interface system using the same
US7961174B1 (en) 2010-01-15 2011-06-14 Microsoft Corporation: Tracking groups of users in motion capture system
US7961934B2 (en) 2003-12-11 2011-06-14 Strider Labs, Inc.: Probable reconstruction of surfaces in occluded regions by computed symmetry
US20110148875A1 (en) 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute: Method and apparatus for capturing motion of dynamic object
RU2422878C1 (en) 2010-02-04 2011-06-27 Vladimir Valentinovich Devyatkov: Method of controlling television using multimodal interface
US20110169726A1 (en) 2010-01-08 2011-07-14 Microsoft Corporation: Evolving universal gesture sets
US20110173574A1 (en) 2010-01-08 2011-07-14 Microsoft Corporation: In application gesture interpretation
US7980885B2 (en) 2008-03-03 2011-07-19 Amad Mennekes Holding Gmbh & Co. Kg: Plug assembly with strain relief
US20110176146A1 (en) 2008-09-02 2011-07-21 Cristina Alvarez Diez: Device and method for measuring a surface
US20110181509A1 (en) 2010-01-26 2011-07-28 Nokia Corporation: Gesture Control
US20110193778A1 (en) 2010-02-05 2011-08-11 Samsung Electronics Co., Ltd.: Device and method for controlling mouse pointer
US20110205151A1 (en) 2009-12-04 2011-08-25 John David Newton: Methods and Systems for Position Detection
US20110213664A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc.: Local advertising content on an interactive head-mounted eyepiece
US8023698B2 (en) 2007-03-30 2011-09-20 Denso Corporation: Information device operation apparatus
US20110228978A1 (en) 2010-03-18 2011-09-22 Hon Hai Precision Industry Co., Ltd.: Foreground object detection system and method
EP2369443A2 (en) 2010-03-25 2011-09-28 User Interface in Sweden AB: System and method for gesture detection and feedback
CN102201121A (en) 2010-03-23 2011-09-28 Hon Hai Precision Industry (Shenzhen) Co., Ltd.: System and method for detecting article in video scene
WO2011119154A1 (en) 2010-03-24 2011-09-29 Hewlett-Packard Development Company, L.P.: Gesture mapping for display device
US20110234840A1 (en) 2008-10-23 2011-09-29 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.: Apparatus, method and computer program for recognizing a gesture in a picture, and apparatus, method and computer program for controlling a device
US20110243451A1 (en) 2010-03-30 2011-10-06 Hideki Oyaizu: Image processing apparatus and method, and program
US8035624B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc: Computer vision based touch screen
US20110251896A1 (en) 2010-04-09 2011-10-13 Affine Systems, Inc.: Systems and methods for matching an advertisement to a video
EP2378488A2 (en) 2010-04-19 2011-10-19 Ydreams - Informática, S.a.: Various methods and apparatuses for achieving augmented reality
US8045825B2 (en) 2007-04-25 2011-10-25 Canon Kabushiki Kaisha: Image processing apparatus and method for composition of real space images and virtual space images
US20110261178A1 (en) 2008-10-15 2011-10-27 The Regents Of The University Of California: Camera system with autonomous miniature camera and light source assembly and method for image enhancement
US20110267259A1 (en) 2010-04-30 2011-11-03 Microsoft Corporation: Reshapable connector with variable rigidity
US20110267265A1 (en) 2010-04-30 2011-11-03 Verizon Patent And Licensing, Inc.: Spatial-input-based cursor projection systems and methods
GB2480140A (en) 2010-05-04 2011-11-09 Timocco Ltd: Tracking and Mapping an Object to a Target
CN102236412A (en) 2010-04-30 2011-11-09 Acer Inc.: Three-dimensional gesture recognition system and vision-based gesture recognition method
US20110279397A1 (en) 2009-01-26 2011-11-17 Zrro Technologies (2009) Ltd.: Device and method for monitoring the object's behavior
US8064704B2 (en) 2006-10-11 2011-11-22 Samsung Electronics Co., Ltd.: Hand gesture recognition input system and method for a mobile phone
US20110289455A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation: Gestures And Gesture Recognition For Manipulating A User-Interface
US20110289456A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation: Gestures And Gesture Modifiers For Manipulating A User-Interface
US20110286676A1 (en) 2010-05-20 2011-11-24 Edge3 Technologies Llc: Systems and related methods for three dimensional gesture recognition in vehicles
US20110296353A1 (en) 2009-05-29 2011-12-01 Canesta, Inc.: Method and system implementing user-centric gesture control
US20110291988A1 (en) 2009-09-22 2011-12-01 Canesta, Inc.: Method and system for recognition of user gesture interaction with passive surface video displays
US20110291925A1 (en) 2009-02-02 2011-12-01 Eyesight Mobile Technologies Ltd.: System and method for object recognition and tracking in a video stream
US20110299737A1 (en) 2010-06-04 2011-12-08 Acer Incorporated: Vision-based hand movement recognition system and method thereof
KR101092909B1 (en) 2009-11-27 2011-12-12 District Holdings Co., Ltd.: Gesture Interactive Hologram Display Apparatus and Method
US20110304650A1 (en) 2010-06-09 2011-12-15 The Boeing Company: Gesture-Based Human Machine Interface
US20110304600A1 (en) 2010-06-11 2011-12-15 Seiko Epson Corporation: Optical position detecting device and display device with position detecting function
US20110310220A1 (en) 2010-06-16 2011-12-22 Microsoft Corporation: Depth camera illuminator with superluminescent light-emitting diode
US20110310007A1 (en) 2010-06-22 2011-12-22 Microsoft Corporation: Item navigation using motion-capture data
US20110314427A1 (en) 2010-06-18 2011-12-22 Samsung Electronics Co., Ltd.: Personalization using custom gestures
US8086971B2 (en) 2006-06-28 2011-12-27 Nokia Corporation: Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US8085339B2 (en) 2004-01-16 2011-12-27 Sony Computer Entertainment Inc.: Method and apparatus for optimizing capture device settings through depth information
US20110317871A1 (en) 2010-06-29 2011-12-29 Microsoft Corporation: Skeletal joint recognition and tracking system
US8112719B2 (en) 2009-05-26 2012-02-07 Topseed Technology Corp.: Method for controlling gesture-based remote control system
US8111239B2 (en) 1997-08-22 2012-02-07 Motion Games, Llc: Man machine interfaces and applications
US20120038637A1 (en) 2003-05-29 2012-02-16 Sony Computer Entertainment Inc.: User-driven three-dimensional interactive gaming environment
WO2012027422A2 (en) 2010-08-24 2012-03-01 Qualcomm Incorporated: Methods and apparatus for interacting with an electronic device application by moving an object in the air over an electronic device display
US20120065499A1 (en) 2009-05-20 2012-03-15 Hitachi Medical Corporation: Medical image diagnosis device and region-of-interest setting method therefore
US20120068914A1 (en) 2010-09-20 2012-03-22 Kopin Corporation: Miniature communications gateway for head mounted display
US8144233B2 (en) 2007-10-03 2012-03-27 Sony Corporation: Display control device, display control method, and display control program for superimposing images to create a composite image
JP4906960B2 (en) 2009-12-17 2012-03-28 NTT Docomo, Inc.: Method and apparatus for interaction between portable device and screen
WO2012039140A1 (en) 2010-09-22 2012-03-29 Shimane Prefecture: Operation input apparatus, operation input method, and program
US20120098744A1 (en) 2010-10-21 2012-04-26 Verizon Patent And Licensing, Inc.: Systems, methods, and apparatuses for spatial input associated with a display
US20120113316A1 (en) 2009-07-17 2012-05-10 Nikon Corporation: Focusing device and camera
US20120113223A1 (en) 2010-11-05 2012-05-10 Microsoft Corporation: User Interaction in Augmented Reality
US20120159380A1 (en) 2010-12-20 2012-06-21 Kocienda Kenneth L: Device, Method, and Graphical User Interface for Navigation of Concurrently Open Software Applications
US20120163675A1 (en) 2010-12-22 2012-06-28 Electronics And Telecommunications Research Institute: Motion capture apparatus and method
US8218858B2 (en) 2005-01-07 2012-07-10 Qualcomm Incorporated: Enhanced object reconstruction
US8229134B2 (en) 2007-05-24 2012-07-24 University Of Maryland: Audio camera using microphone arrays for real time capture of audio images and method for jointly processing the audio images with video images
US20120194517A1 (en) 2011-01-31 2012-08-02 Microsoft Corporation: Using a Three-Dimensional Environment Model in Gameplay
US8235529B1 (en) 2011-11-30 2012-08-07 Google Inc.: Unlocking a screen using eye tracking information
US20120204133A1 (en) 2009-01-13 2012-08-09 Primesense Ltd.: Gesture-Based User Interface
US8244233B2 (en) 2009-02-23 2012-08-14 Augusta Technology, Inc.: Systems and methods for operating a virtual whiteboard using a mobile phone device
US8249345B2 (en) 2008-06-27 2012-08-21 Mako Surgical Corp.: Automatic image segmentation using contour propagation
US20120218263A1 (en) 2009-10-12 2012-08-30 Metaio Gmbh: Method for representing virtual information in a view of a real environment
US20120223959A1 (en) 2011-03-01 2012-09-06 Apple Inc.: System and method for a touchscreen slider with toggle control
US8270669B2 (en) 2008-02-06 2012-09-18 Denso Corporation: Apparatus for extracting operating object and apparatus for projecting operating hand
US20120236288A1 (en) 2009-12-08 2012-09-20 Qinetiq Limited: Range Based Sensing
US20120250936A1 (en) 2011-03-31 2012-10-04 Smart Technologies Ulc: Interactive input system and method
US8289162B2 (en) 2008-12-22 2012-10-16 Wimm Labs, Inc.: Gesture-based user interface for a wearable portable device
US20120270654A1 (en) 2011-01-05 2012-10-25 Qualcomm Incorporated: Method and apparatus for scaling gesture recognition to physical dimensions of a user
US20120274781A1 (en) 2011-04-29 2012-11-01 Siemens Corporation: Marginal space learning for multi-person tracking over mega pixel imagery
JP2012527145A (en) 2009-05-12 2012-11-01 Koninklijke Philips Electronics N.V.: Camera, system having camera, method of operating camera, and method of deconvolving recorded image
US8304727B2 (en) 2009-02-06 2012-11-06 Siliconfile Technologies Inc.: Image sensor capable of judging proximity to subject
US20120281873A1 (en) 2011-05-05 2012-11-08 International Business Machines Corporation: Incorporating video meta-data in 3d models
US20120293667A1 (en) 2011-05-16 2012-11-22 Ut-Battelle, Llc: Intrinsic feature-based pose measurement for imaging motion compensation
US8319832B2 (en) 2008-01-31 2012-11-27 Denso Corporation: Input apparatus and imaging apparatus
US20120314030A1 (en) 2011-06-07 2012-12-13 International Business Machines Corporation: Estimation of object properties in 3d world
US20120320080A1 (en) 2011-06-14 2012-12-20 Microsoft Corporation: Motion based virtual object navigation
US20130019204A1 (en) 2011-07-14 2013-01-17 Microsoft Corporation: Adjusting content attributes through actions on context based menu
US8363010B2 (en) 2007-03-23 2013-01-29 Denso Corporation: Operating input device for reducing input error
US20130033483A1 (en) 2011-08-01 2013-02-07 Soungmin Im: Electronic device for displaying three-dimensional image and method of using the same
US20130038694A1 (en) 2010-04-27 2013-02-14 Sanjay Nichani: Method for moving object detection using an image sensor and structured light
US20130044951A1 (en) 2011-08-19 2013-02-21 Der-Chun Cherng: Moving object detection method using image contrast enhancement
US20130050425A1 (en) 2011-08-24 2013-02-28 Soungmin Im: Gesture-based user interface method and apparatus
US20130057469A1 (en) 2010-05-11 2013-03-07 Nippon Systemware Co Ltd: Gesture recognition device, method, program, and computer-readable medium upon which program is stored
US8395600B2 (en) 2009-01-30 2013-03-12 Denso Corporation: User interface device
US20130086531A1 (en) 2011-09-29 2013-04-04 Kabushiki Kaisha Toshiba: Command issuing device, method and computer program product
US20130097566A1 (en) 2011-10-17 2013-04-18 Carl Fredrik Alexander BERGLUND: System and method for displaying items on electronic devices
US8432377B2 (en) 2007-08-30 2013-04-30 Next Holdings Limited: Optical touchscreen with improved illumination
US20130120319A1 (en) 2005-10-31 2013-05-16 Extreme Reality Ltd.: Methods, Systems, Apparatuses, Circuits and Associated Computer Executable Code for Detecting Motion, Position and/or Orientation of Objects Within a Defined Spatial Region
US20130148852A1 (en) 2011-12-08 2013-06-13 Canon Kabushiki Kaisha: Method, apparatus and system for tracking an object in a sequence of images
US8471848B2 (en) 2007-03-02 2013-06-25 Organic Motion, Inc.: System and method for tracking three dimensional objects
US20130182079A1 (en) 2012-01-17 2013-07-18 Ocuspec: Motion capture using cross-sections of an object
US20130182897A1 (en) 2012-01-17 2013-07-18 David Holz: Systems and methods for capturing motion in three-dimensional space
WO2013109609A2 (en) 2012-01-17 2013-07-25 Leap Motion, Inc.: Enhanced contrast for object detection and characterization by optical imaging
US20130187952A1 (en) 2010-10-10 2013-07-25 Rafael Advanced Defense Systems Ltd.: Network-based real time registered augmented reality for mobile devices
US20130191911A1 (en) 2012-01-20 2013-07-25 Apple Inc.: Device, Method, and Graphical User Interface for Accessing an Application in a Locked Device
US20130194173A1 (en) 2012-02-01 2013-08-01 Ingeonix Corporation: Touch free control of electronic systems and associated methods
US20130208948A1 (en) 2010-10-24 2013-08-15 Rafael Advanced Defense Systems Ltd.: Tracking and identification of a moving object from a moving sensor using a 3d model
US8514221B2 (en) 2010-01-05 2013-08-20 Apple Inc.: Working with 3D objects
US20130222640A1 (en) 2012-02-27 2013-08-29 Samsung Electronics Co., Ltd.: Moving image shooting apparatus and method of using a camera device
US20130222233A1 (en) 2012-02-29 2013-08-29 Korea Institute Of Science And Technology: System and method for implementing 3-dimensional user interface
US20130239059A1 (en) 2012-03-06 2013-09-12 Acer Incorporated: Touch screen folder control
US20130241832A1 (en) 2010-09-07 2013-09-19 Zrro Technologies (2009) Ltd.: Method and device for controlling the behavior of virtual objects on a display
US20130252691A1 (en) 2012-03-20 2013-09-26 Ilias Alexopoulos: Methods and systems for a gesture-controlled lottery terminal
US20130258140A1 (en) 2012-03-10 2013-10-03 Digitaloptics Corporation: Miniature MEMS Autofocus Zoom Camera Module
US20130257736A1 (en) 2012-04-03 2013-10-03 Wistron Corporation: Gesture sensing apparatus, electronic system having gesture input function, and gesture determining method
US8553037B2 (en) 2002-08-14 2013-10-08 Shawn Smith: Do-It-Yourself photo realistic talking head creation system and method
US20130271397A1 (en) 2012-04-16 2013-10-17 Qualcomm Incorporated: Rapid gesture re-engagement
US20130283213A1 (en) 2012-03-26 2013-10-24 Primesense Ltd.: Enhanced virtual touchpad
US8582809B2 (en) 2010-06-28 2013-11-12 Robert Bosch Gmbh: Method and device for detecting an interfering object in a camera image
US20130300831A1 (en) 2012-05-11 2013-11-14 Loren Mavromatis: Camera scene fitting of real world scenes
US20130307935A1 (en) 2011-02-01 2013-11-21 National University Of Singapore: Imaging system and method
US8593417B2 (en) 2009-04-30 2013-11-26 Denso Corporation: Operation apparatus for in-vehicle electronic device and method for controlling the same
US20130321265A1 (en) * 2011-02-09 2013-12-05 Primesense Ltd.: Gaze-Based Display Control
US20140002365A1 (en) 2012-06-28 2014-01-02 Intermec Ip Corp.: Dual screen display for mobile computing device
US20140010441A1 (en) 2012-07-09 2014-01-09 Qualcomm Incorporated: Unsupervised movement detection and gesture recognition
US8631355B2 (en) 2010-01-08 2014-01-14 Microsoft Corporation: Assigning gesture dictionaries
US20140015831A1 (en) 2012-07-16 2014-01-16 Electronics And Telecommunications Research Institute: Apparatus and method for processing manipulation of 3d virtual object
DE102007015497B4 (en) 2006-03-31 2014-01-23 Denso Corporation: Speech recognition device and speech recognition program
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc.: Systems and methods for capturing motion in three-dimensional space
US20140037135A1 (en) 2012-07-31 2014-02-06 Omek Interactive, Ltd.: Context-driven adjustment of camera parameters
US8659658B2 (en) 2010-02-09 2014-02-25 Microsoft Corporation: Physical interaction zone for gesture-based user interfaces
US20140055385A1 (en) 2011-09-27 2014-02-27 Elo Touch Solutions, Inc.: Scaling of gesture based input
US20140055396A1 (en) 2012-08-27 2014-02-27 Microchip Technology Incorporated: Input Device with Hand Posture Control
US20140064566A1 (en) 2012-08-29 2014-03-06 Xerox Corporation: Heuristic-based approach for automatic payment gesture classification and detection
US20140063060A1 (en) 2012-09-04 2014-03-06 Qualcomm Incorporated: Augmented reality surface segmentation
US20140063055A1 (en) 2010-02-28 2014-03-06 Osterhout Group, Inc.: Ar glasses specific user interface and control interface based on a connected external device type
US20140071069A1 (en) 2011-03-29 2014-03-13 Glen J. Anderson: Techniques for touch and non-touch user interaction input
US20140081521A1 (en) 2009-02-15 2014-03-20 Neonode Inc.: Light-based touch controls on a steering wheel and dashboard
US20140085203A1 (en) 2012-09-26 2014-03-27 Seiko Epson Corporation: Video image display system and head mounted display
US20140095119A1 (en) 2011-06-13 2014-04-03 Industry-Academic Cooperation Foundation, Yonsei University: System and method for location-based construction project management
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc.: Enhanced contrast for object detection and characterization by optical imaging
US20140098018A1 (en) 2012-10-04 2014-04-10 Microsoft Corporation: Wearable sensor for tracking articulated body-parts
US20140125813A1 (en) 2012-11-08 2014-05-08 David Holz: Object detection and tracking with variable-field illumination devices
US20140125775A1 (en) 2012-11-08 2014-05-08 Leap Motion, Inc.: Three-dimensional image sensors
US20140134733A1 (en) 2012-11-13 2014-05-15 The Board Of Trustees Of The Leland Stanford Junior University: Chemically defined production of cardiomyocytes from pluripotent stem cells
US20140132738A1 (en) 2011-08-23 2014-05-15 Panasonic Corporation: Three-dimensional image capture device, lens control device and program
US20140139425A1 (en) 2012-11-19 2014-05-22 Sony Corporation: Image processing apparatus, image processing method, image capture apparatus and computer program
US8738523B1 (en) 2013-03-15 2014-05-27 State Farm Mutual Automobile Insurance Company: Systems and methods to identify and profile a vehicle operator
US8744122B2 (en) 2008-10-22 2014-06-03 Sri International: System and method for object detection from a moving platform
US20140157135A1 (en) 2012-12-03 2014-06-05 Samsung Electronics Co., Ltd.: Method and mobile terminal for controlling bluetooth low energy device
US20140161311A1 (en) 2012-12-10 2014-06-12 Hyundai Motor Company: System and method for object image detecting
US20140168062A1 (en) 2012-12-13 2014-06-19 Eyesight Mobile Technologies Ltd.: Systems and methods for triggering actions based on touch-free gesture detection
US20140176420A1 (en) 2012-12-26 2014-06-26 Futurewei Technologies, Inc.: Laser Beam Based Gesture Control Interface for Mobile Devices
US8768022B2 (en) 2006-11-16 2014-07-01 Vanderbilt University: Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same
US20140189579A1 (en) 2013-01-02 2014-07-03 Zrro Technologies (2009) Ltd.: System and method for controlling zooming and/or scrolling
US20140192024A1 (en) 2013-01-08 2014-07-10 Leap Motion, Inc.: Object detection and tracking with audio and optical signals
US20140201666A1 (en) 2013-01-15 2014-07-17 Raffi Bedikian: Dynamic, free-space user interactions for machine control
US20140201689A1 (en) 2013-01-15 2014-07-17 Raffi Bedikian: Free-space user interface and control using virtual constructs
US20140222385A1 (en) 2011-02-25 2014-08-07 Smith Heimann Gmbh: Image reconstruction based on parametric models
US20140223385A1 (en) 2013-02-05 2014-08-07 Qualcomm Incorporated: Methods for system engagement via 3d object detection
US8803885B1 (en) 2011-09-07 2014-08-12 Infragistics, Inc.: Method for evaluating spline parameters for smooth curve sampling
US20140225826A1 (en) 2011-09-07 2014-08-14 Nitto Denko Corporation: Method for detecting motion of input body and input device using same
US20140225918A1 (en) 2013-02-14 2014-08-14 Qualcomm Incorporated: Human-body-gesture-based region and volume selection for hmd
US20140236529A1 (en) 2013-02-18 2014-08-21 Motorola Mobility Llc: Method and Apparatus for Determining Displacement from Acceleration Data
US8817087B2 (en) 2010-11-01 2014-08-26 Robert Bosch Gmbh: Robust video-based handwriting and gesture recognition for in-car applications
US20140240225A1 (en) 2013-02-26 2014-08-28 Pointgrab Ltd.: Method for touchless control of a device
US20140240215A1 (en) 2013-02-26 2014-08-28 Corel Corporation: System and method for controlling a user interface utility using a vision system
US20140248950A1 (en) 2013-03-01 2014-09-04 Martin Tosas Bautista: System and method of interaction for mobile devices
US20140249961A1 (en) 2013-03-04 2014-09-04 Adidas Ag: Interactive cubicle and method for determining a body shape
US20140253512A1 (en) 2013-03-11 2014-09-11 Hitachi Maxell, Ltd.: Manipulation detection apparatus, manipulation detection method, and projector
US20140253785A1 (en) 2013-03-07 2014-09-11 Mediatek Inc.: Auto Focus Based on Analysis of State or State Change of Image Content
US20140267098A1 (en) 2013-03-15 2014-09-18 Lg Electronics Inc.: Mobile terminal and method of controlling the mobile terminal
US20140282282A1 (en) 2013-03-15 2014-09-18 Leap Motion, Inc.: Dynamic user interactions for display control
US8842084B2 (en) 2010-09-08 2014-09-23 Telefonaktiebolaget L M Ericsson (Publ): Gesture-based object manipulation methods and devices
US8854433B1 (en) 2012-02-03 2014-10-07 Aquifi, Inc.: Method and system enabling natural user interface gestures with an electronic system
US20140307920A1 (en) 2013-04-12 2014-10-16 David Holz: Systems and methods for tracking occluded objects in three-dimensional space
US20140320408A1 (en) 2013-04-26 2014-10-30 Leap Motion, Inc.: Non-tactile interface systems and methods
US8878749B1 (en) 2012-01-06 2014-11-04 Google Inc.: Systems and methods for position estimation
US8891868B1 (en) 2011-08-04 2014-11-18 Amazon Technologies, Inc.: Recognizing gestures captured by video
US20140344762A1 (en) 2013-05-14 2014-11-20 Qualcomm Incorporated: Augmented reality (ar) capture & play
US8907982B2 (en) 2008-12-03 2014-12-09 Alcatel Lucent: Mobile device for augmented reality applications
US20140364209A1 (en) 2013-06-07 2014-12-11 Sony Corporation Entertainment America LLC: Systems and Methods for Using Reduced Hops to Generate an Augmented Virtual Reality Scene Within A Head Mounted System
US20140364212A1 (en) 2013-06-08 2014-12-11 Sony Computer Entertainment Inc.: Systems and methods for transitioning between transparent mode and non-transparent mode in a head mounted display
US20140369558A1 (en) 2012-01-17 2014-12-18 David Holz: Systems and methods for machine control
US20140375547A1 (en) 2012-03-13 2014-12-25 Eyesight Mobile Technologies Ltd.: Touch free user interface
US8922590B1 (en) 2013-10-01 2014-12-30 Myth Innovations, Inc.: Augmented reality interface and method of use
WO2014208087A1 (en) 2013-06-27 2014-12-31 Panasonic Intellectual Property Corporation of America: Motion sensor device having plurality of light sources
US20150003673A1 (en) 2013-07-01 2015-01-01 Hand Held Products, Inc.: Dimensioning system
US20150009149A1 (en) 2012-01-05 2015-01-08 California Institute Of Technology: Imaging surround system for touch-free display control
US20150016777A1 (en) 2012-06-11 2015-01-15 Magic Leap, Inc.: Planar waveguide apparatus with diffraction element(s) and system employing same
US20150022447A1 (en) 2013-07-22 2015-01-22 Leap Motion, Inc.: Non-linear motion capture using frenet-serret frames
US8942881B2 (en) 2012-04-02 2015-01-27 Google Inc.: Gesture-based automotive controls
US20150029091A1 (en) 2013-07-29 2015-01-29 Sony Corporation: Information presentation apparatus and information processing system
US20150040040A1 (en) 2013-08-05 2015-02-05 Alexandru Balan: Two-hand interaction with natural user interface
US20150054729A1 (en) 2009-04-02 2015-02-26 David MINNEN: Remote devices used in a markerless installation of a spatial operating environment incorporating gestural control
WO2015026707A1 (en) 2013-08-22 2015-02-26 Sony Corporation: Close range natural user interface system and method of operation thereof
US20150084864A1 (en) 2012-01-09 2015-03-26 Google Inc.: Input Method
US20150097772A1 (en) 2012-01-06 2015-04-09 Thad Eugene Starner: Gaze Signal Based on Physical Characteristics of the Eye
US20150103004A1 (en) 2013-10-16 2015-04-16 Leap Motion, Inc.: Velocity field interaction for free space gesture interface and control
US9014414B2 (en) 2008-07-29 2015-04-21 Canon Kabushiki Kaisha: Information processing apparatus and information processing method for processing image information at an arbitrary viewpoint in a physical space or virtual space
GB2519418A (en) 2013-08-21 2015-04-22 Sony Comp Entertainment Europe: Head-mountable apparatus and systems
US20150115802A1 (en) 2013-10-31 2015-04-30 General Electric Company: Customizable modular luminaire
US20150116214A1 (en) 2013-10-29 2015-04-30 Anders Grunnet-Jepsen: Gesture based human computer interaction
US20150153833A1 (en) 2012-07-13 2015-06-04 Softkinetic Software: Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand
US9056396B1 (en) 2013-03-05 2015-06-16 Autofuss: Programming of a robotic arm using a motion capture system
US20150172539A1 (en) 2013-12-17 2015-06-18 Amazon Technologies, Inc.: Distributing processing for imaging processing
US20150193669A1 (en) 2011-11-21 2015-07-09 Pixart Imaging Inc.: System and method based on hybrid biometric detection
US20150205358A1 (en) 2014-01-20 2015-07-23 Philip Scott Lyren: Electronic Device with Touchless User Interface
US20150206321A1 (en) 2014-01-23 2015-07-23 Michael J. Scavezze: Automated content scrolling
US20150205400A1 (en) 2014-01-21 2015-07-23 Microsoft Corporation: Grip Detection
US20150227795A1 (en) 2012-01-06 2015-08-13 Google Inc.: Object Outlining to Initiate a Visual Search
US20150234569A1 (en) 2010-03-30 2015-08-20 Harman Becker Automotive Systems Gmbh: Vehicle user interface unit for a vehicle electronic device
US9124778B1 (en) 2012-08-29 2015-09-01 Nomi Corporation: Apparatuses and methods for disparity-based tracking and analysis of objects in a region of interest
US9119670B2 (en) 2010-04-28 2015-09-01 Ryerson University: System and methods for intraoperative guidance feedback
US9122354B2 (en) 2012-03-14 2015-09-01 Texas Instruments Incorporated: Detecting wave gestures near an illuminated surface
US20150253428A1 (en) 2013-03-15 2015-09-10 Leap Motion, Inc.: Determining positional information for an object in space
US20150258432A1 (en) 2014-03-14 2015-09-17 Sony Computer Entertainment Inc.: Gaming device with volumetric sensing
US20150261291A1 (en) 2014-03-14 2015-09-17 Sony Computer Entertainment Inc.: Methods and Systems Tracking Head Mounted Display (HMD) and Calibrations for HMD Headband Adjustments
US20150293597A1 (en) 2012-10-31 2015-10-15 Pranav MISHRA: Method, Apparatus and Computer Program for Enabling a User Input Command to be Performed
US20150304593A1 (en) 2012-11-27 2015-10-22 Sony Corporation: Display apparatus, display method, and computer program
US20150309629A1 (en) 2014-04-28 2015-10-29 Qualcomm Incorporated: Utilizing real world objects for user input
US9182838B2 (en) 2011-04-19 2015-11-10 Microsoft Technology Licensing, Llc: Depth camera-based relative gesture detection
US9182812B2 (en) 2013-01-08 2015-11-10 Ayotle: Virtual sensor systems and methods
US20150323785A1 (en) 2012-07-27 2015-11-12 Nissan Motor Co., Ltd.: Three-dimensional object detection device and foreign matter detection device
US20150363070A1 (en) 2011-08-04 2015-12-17 Itay Katz: System and method for interfacing with a device via a 3d display
US20160062573A1 (en) 2014-09-02 2016-03-03 Apple Inc.: Reduced size user interface
US20160093105A1 (en) 2014-09-30 2016-03-31 Sony Computer Entertainment Inc.: Display of text information on a head-mounted display
US9342160B2 (en) 2013-07-31 2016-05-17 Microsoft Technology Licensing, Llc: Ergonomic physical interaction zone cursor mapping
US9389779B2 (en) 2013-03-14 2016-07-12 Intel Corporation: Depth-based user interface gesture control
US20170102791A1 (en) 2015-10-09 2017-04-13 Zspace, Inc.: Virtual Plane in a Stylus Based Stereoscopic Display System
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc.: Systems and methods of free-space gestural interaction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9361732B2 (en) 2014-05-01 2016-06-07 Microsoft Technology Licensing, Llc: Transitions between body-locked and world-locked augmented reality

Patent Citations (523)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US2665041A (en) 1952-01-09 1954-01-05 Daniel J Maffucci: Combination form for washing woolen socks
US3064704A (en) 1961-01-26 1962-11-20 Hutchinson Cie Ets: Pneumatic assembly for a vehicle wheel
US4175862A (en) 1975-08-27 1979-11-27 Solid Photography Inc.: Arrangement for sensing the geometric characteristics of an object
US4879659A (en) 1987-11-24 1989-11-07 Bowlin William P: Log processing systems
US4876455A (en) 1988-02-25 1989-10-24 Westinghouse Electric Corp.: Fiber optic solder joint inspection system
US4893223A (en) 1989-01-10 1990-01-09 Northern Telecom Limited: Illumination devices for inspection systems
US5038258A (en) 1989-03-02 1991-08-06 Carl-Zeiss-Stiftung: Illuminating arrangement for illuminating an object with incident light
JPH02236407A (en) 1989-03-10 1990-09-19 Agency Of Ind Science & Technol: Method and device for measuring shape of object
US5134661A (en) 1991-03-04 1992-07-28 Reinsch Roger A: Method of capture and analysis of digitized image data
US5282067A (en) 1991-10-07 1994-01-25 California Institute Of Technology: Self-amplified optical pattern recognition system
DE4201934A1 (en) 1992-01-24 1993-07-29 Siemens Ag: Gesture computer
US6184326B1 (en) 1992-03-20 2001-02-06 Fina Technology, Inc.: Syndiotactic polypropylene
US5581276A (en) 1992-09-08 1996-12-03 Kabushiki Kaisha Toshiba: 3D human interface apparatus using motion recognition based on dynamic image processing
US5434617A (en) 1993-01-29 1995-07-18 Bell Communications Research, Inc.: Automatic tracking camera control system
WO1994026057A1 (en) 1993-04-29 1994-11-10 Scientific Generics Limited: Background separation for still and moving images
US5454043A (en) 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc.: Dynamic and static hand gesture recognition through low-level image analysis
US5691737A (en) 1993-09-21 1997-11-25 Sony Corporation: System for explaining an exhibit using spectacle-type displays
US5659475A (en) 1994-03-17 1997-08-19 Brown; Daniel M.: Electronic air traffic control system for use in airport towers
US5594469A (en) 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc.: Hand gesture machine control system
US5900863A (en) 1995-03-16 1999-05-04 Kabushiki Kaisha Toshiba: Method and apparatus for controlling computer without touching input device
JPH08261721A (en) 1995-03-22 1996-10-11 Teijin Ltd: Deterioration detecting method for image processing illuminating means
US20050131607A1 (en) 1995-06-07 2005-06-16 Automotive Technologies International Inc.: Method and arrangement for obtaining information about vehicle occupants
US5940538A (en) 1995-08-04 1999-08-17 Spiegel; Ehud: Apparatus and methods for object border tracking
US5574511A (en) 1995-10-18 1996-11-12 Polaroid Corporation: Background replacement for an image
US5742263A (en) 1995-12-18 1998-04-21 Telxon Corporation: Head tracking system for a head mounted display system
JPH09259278A (en) 1996-03-25 1997-10-03 Matsushita Electric Ind Co Ltd: Image processing device
US6002808A (en) 1996-07-26 1999-12-14 Mitsubishi Electric Information Technology Center America, Inc.: Hand gesture control system
US6184926B1 (en) 1996-11-26 2001-02-06 Ncr Corporation: System and method for detecting a human face in uncontrolled environments
US6031661A (en) 1997-01-23 2000-02-29 Yokogawa Electric Corporation: Confocal microscopic equipment
US6492986B1 (en) 1997-06-02 2002-12-10 The Trustees Of The University Of Pennsylvania: Method for human face shape and motion estimation based on integrating optical flow and deformable models
US6075895A (en) 1997-06-20 2000-06-13 Holoplex: Methods and apparatus for gesture recognition based on templates
US6252598B1 (en) 1997-07-03 2001-06-26 Lucent Technologies Inc.: Video hand image computer interface
US8111239B2 (en) 1997-08-22 2012-02-07 Motion Games, Llc: Man machine interfaces and applications
US6263091B1 (en) 1997-08-22 2001-07-17 International Business Machines Corporation: System and method for identifying foreground and background portions of digitized images
US6072494A (en) 1997-10-15 2000-06-06 Electric Planet, Inc.: Method and apparatus for real-time gesture recognition
US6181343B1 (en) 1997-12-23 2001-01-30 Philips Electronics North America Corp.: System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6195104B1 (en) 1997-12-23 2001-02-27 Philips Electronics North America Corp.: System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
JP2009037594A (en) 1997-12-23 2009-02-19 Koninkl Philips Electronics Nv: System and method for constructing three-dimensional image using camera-based gesture input
US6031161A (en) 1998-02-04 2000-02-29 Dekalb Genetics Corporation: Inbred corn plant GM9215 and seeds thereof
US6154558A (en) 1998-04-22 2000-11-28 Hsieh; Kuan-Hong: Intention identification method
JP2000023038A (en) 1998-06-30 2000-01-21 Toshiba Corp: Image extraction device
US6493041B1 (en) 1998-06-30 2002-12-10 Sun Microsystems, Inc.: Method and apparatus for the detection of motion in video
US20060210112A1 (en) 1998-08-10 2006-09-21 Cohen Charles J: Behavior recognition system
US20090274339A9 (en) 1998-08-10 2009-11-05 Cohen Charles J: Behavior recognition system
US6950534B2 (en) 1998-08-10 2005-09-27 Cybernet Systems Corporation: Gesture-controlled interfaces for self-service machines and other applications
US6603867B1 (en) 1998-09-08 2003-08-05 Fuji Xerox Co., Ltd.: Three-dimensional object identifying system
US6629065B1 (en) 1998-09-30 2003-09-30 Wisconsin Alumni Research Foundation: Methods and apparata for rapid computer-aided design of objects in virtual reality and other environments
US6498628B2 (en) 1998-10-13 2002-12-24 Sony Corporation: Motion sensing interface
EP0999542A1 (en) 1998-11-02 2000-05-10 Ncr International Inc.: Methods of and apparatus for hands-free operation of a voice recognition system
US7483049B2 (en) 1998-11-20 2009-01-27 Aman James A: Optimizations for live event, real-time, 3D object tracking
US6661918B1 (en) 1998-12-04 2003-12-09 Interval Research Corporation: Background estimation and segmentation based on range and color
US6147678A (en) 1998-12-09 2000-11-14 Lucent Technologies Inc.: Video hand image-three-dimensional computer interface with multiple degrees of freedom
US6204852B1 (en) 1998-12-09 2001-03-20 Lucent Technologies Inc.: Video hand image three-dimensional computer interface
US6578203B1 (en) 1999-03-08 2003-06-10 Tazwell L. Anderson, Jr.: Audio/video signal distribution system for head mounted displays
US6993157B1 (en) 1999-05-18 2006-01-31 Sanyo Electric Co., Ltd.: Dynamic image processing method and device and medium
US6804656B1 (en) 1999-06-23 2004-10-12 Visicu, Inc.: System and method for providing continuous, expert network critical care services from a remote location(s)
US6346933B1 (en) 1999-09-21 2002-02-12 Seiko Epson Corporation: Interactive display presentation system
US6734911B1 (en) 1999-09-30 2004-05-11 Koninklijke Philips Electronics N.V.: Tracking camera using a lens that generates both wide-angle and narrow-angle views
US20010044858A1 (en) 1999-12-21 2001-11-22 Junichi Rekimoto: Information input/output system and information input/output method
US6738424B1 (en) 1999-12-27 2004-05-18 Objectvideo, Inc.: Scene model generation from video for use in video processing
US6771294B1 (en) 1999-12-29 2004-08-03 Petri Pulli: User interface
US6819796B2 (en) 2000-01-06 2004-11-16 Sharp Kabushiki Kaisha: Method of and apparatus for segmenting a pixellated image
US6674877B1 (en) 2000-02-03 2004-01-06 Microsoft Corporation: System and method for visually tracking occluded objects in real time
US20020008211A1 (en) 2000-02-10 2002-01-24 Peet Kask: Fluorescence intensity multiple distributions analysis: concurrent determination of diffusion times and molecular brightness
US20020021287A1 (en) 2000-02-11 2002-02-21 Canesta, Inc.: Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device
US6463402B1 (en) 2000-03-06 2002-10-08 Ralph W. Bennett: Infeed log scanning for lumber optimization
US20020008139A1 (en) 2000-04-21 2002-01-24 Albertelli Lawrence E.: Wide-field extended-depth doubly telecentric catadioptric optical system for digital imaging
US6417970B1 (en) 2000-06-08 2002-07-09 Interactive Imaging Systems: Two stage optical system for head mounted display
US20010052985A1 (en) 2000-06-12 2001-12-20 Shuji Ono: Image capturing apparatus and distance measuring method
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc: Camera-based touch system
US20020041327A1 (en) 2000-07-24 2002-04-11 Evan Hildreth: Video-based image control system
US7152024B2 (en) 2000-08-30 2006-12-19 Microsoft Corporation: Facial image processing methods and systems
US6901170B1 (en) 2000-09-05 2005-05-31 Fuji Xerox Co., Ltd.: Image processing device and recording medium
US20020105484A1 (en) 2000-09-25 2002-08-08 Nassir Navab: System and method for calibrating a monocular optical see-through head-mounted display system for augmented reality
JP2002133400A (en) 2000-10-24 2002-05-10 Oki Electric Ind Co Ltd: Object extraction image processor
US6798628B1 (en) 2000-11-17 2004-09-28 Pass & Seymour, Inc.: Arc fault circuit detector having two arc fault detection levels
US20080110994A1 (en) 2000-11-24 2008-05-15 Knowles C H: Method of illuminating objects during digital image capture operations by mixing visible and invisible spectral illumination energy at point of sale (POS) environments
US20020080094A1 (en) 2000-12-22 2002-06-27 Frank Biocca: Teleportal face-to-face system
US20060034545A1 (en) 2001-03-08 2006-02-16 Universite Joseph Fourier: Quantitative analysis, visualization and movement correction in dynamic processes
US7542586B2 (en) 2001-03-13 2009-06-02 Johnson Raymond C: Touchless identification system for monitoring hand washing or application of a disinfectant
US6814656B2 (en) 2001-03-20 2004-11-09 Luis J. Rodriguez: Surface treatment disks for rotary tools
US20040145809A1 (en) 2001-03-20 2004-07-29 Karl-Heinz Brenner: Element for the combined symmetrization and homogenization of a bundle of beams
US20050007673A1 (en) 2001-05-23 2005-01-13 Chaoulov Vesselin I.: Compact microlenslet arrays imager
US6919880B2 (en) 2001-06-01 2005-07-19 Smart Technologies Inc.: Calibrating camera offsets to facilitate object position determination using triangulation
US20030123703A1 (en) 2001-06-29 2003-07-03 Honeywell International Inc.: Method for monitoring a moving object and system regarding same
US20030053658A1 (en) 2001-06-29 2003-03-20 Honeywell International Inc.: Surveillance system and methods regarding same
US20030053659A1 (en) 2001-06-29 2003-03-20 Honeywell International Inc.: Moving object assessment system and method
US20040125228A1 (en) 2001-07-25 2004-07-01 Robert Dougherty: Apparatus and method for determining the range of remote objects
US20030081141A1 (en) 2001-09-17 2003-05-01 Douglas Mazzapica: Brightness adjustment method
US7213707B2 (en) 2001-12-11 2007-05-08 Walgreen Co.: Product shipping and display carton
US6804654B2 (en) 2002-02-11 2004-10-12 Telemanager Technologies, Inc.: System and method for providing prescription services using voice recognition
US7215828B2 (en) 2002-02-13 2007-05-08 Eastman Kodak Company: Method and system for determining image orientation
US20030152289A1 (en) 2002-02-13 2003-08-14 Eastman Kodak Company: Method and system for determining image orientation
US7340077B2 (en) 2002-02-15 2008-03-04 Canesta, Inc.: Gesture recognition system using depth perceptive sensors
JP2003256814A (en) 2002-02-27 2003-09-12 Olympus Optical Co Ltd: Substrate checking device
US7831932B2 (en) 2002-03-08 2010-11-09 Revelations in Design, Inc.: Electric device control apparatus and methods for making and using same
US7861188B2 (en) 2002-03-08 2010-12-28 Revelation And Design, Inc: Electric device control apparatus and methods for making and using same
US6702494B2 (en) 2002-03-27 2004-03-09 Geka Brush Gmbh: Cosmetic unit
US20030202697A1 (en) 2002-04-25 2003-10-30 Simard Patrice Y.: Segmented layered image system
US8035624B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc: Computer vision based touch screen
US20090122146A1 (en) 2002-07-27 2009-05-14 Sony Computer Entertainment Inc.: Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US20100303298A1 (en) 2002-07-27 2010-12-02 Sony Computer Entertainment Inc.: Selective sound source listening in conjunction with computer interactive processing
US8553037B2 (en) 2002-08-14 2013-10-08 Shawn Smith: Do-It-Yourself photo realistic talking head creation system and method
US20040103111A1 (en) 2002-11-25 2004-05-27 Eastman Kodak Company: Method and computer program product for determining an area of importance in an image using eye monitoring information
US20040125984A1 (en) 2002-12-19 2004-07-01 Wataru Ito: Object tracking method and object tracking apparatus
US20040155877A1 (en) 2003-02-12 2004-08-12 Canon Europa N.V.: Image processing apparatus
JP2004246252A (en) 2003-02-17 2004-09-02 Takenaka Komuten Co Ltd: Apparatus and method for collecting image information
US7257237B1 (en) 2003-03-07 2007-08-14 Sandia Corporation: Real time markerless motion tracking using linked kinematic chains
US7532206B2 (en) 2003-03-11 2009-05-12 Smart Technologies Ulc: System and method for differentiating between pointers used to contact touch surface
US20040212725A1 (en) 2003-03-19 2004-10-28 Ramesh Raskar: Stylized rendering using a multi-flash camera
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation: Architecture for controlling a computer using hand gestures
EP1477924A2 (en) 2003-03-31 2004-11-17 HONDA MOTOR CO., Ltd.: Gesture recognition apparatus, method and program
US20060072105A1 (en) 2003-05-19 2006-04-06 Micro-Epsilon Messtechnik Gmbh & Co. Kg: Method and apparatus for optically controlling the quality of objects having a circular edge
US20120038637A1 (en) 2003-05-29 2012-02-16 Sony Computer Entertainment Inc.: User-driven three-dimensional interactive gaming environment
DE10326035A1 (en) 2003-06-10 2005-01-13 Hema Elektronik-Fertigungs- Und Vertriebs Gmbh: Method for adaptive error detection on a structured surface
WO2004114220A1 (en) 2003-06-17 2004-12-29 Brown University: Method and apparatus for model-based detection of structure in projection data
US7244233B2 (en) 2003-07-29 2007-07-17 Ntd Laboratories, Inc.: System and method for utilizing shape analysis to assess fetal abnormality
US20050068518A1 (en) 2003-08-29 2005-03-31 Baney Douglas M.: Position determination that is responsive to a retro-reflective object
US7646372B2 (en) 2003-09-15 2010-01-12 Sony Computer Entertainment Inc.: Methods and systems for enabling direction detection when interfacing with a computer program
US7536032B2 (en) 2003-10-24 2009-05-19 Reactrix Systems, Inc.: Method and system for processing captured image information in an interactive video display system
US20050094019A1 (en) 2003-10-31 2005-05-05 Grosvenor David A: Camera control
US7961934B2 (en) 2003-12-11 2011-06-14 Strider Labs, Inc.: Probable reconstruction of surfaces in occluded regions by computed symmetry
US20060028656A1 (en) 2003-12-18 2006-02-09 Shalini Venkatesh: Method and apparatus for determining surface displacement based on an image of a retroreflector attached to the surface
US8085339B2 (en) 2004-01-16 2011-12-27 Sony Computer Entertainment Inc.: Method and apparatus for optimizing capture device settings through depth information
US20050156888A1 (en) 2004-01-16 2005-07-21 Tong Xie: Position determination and motion tracking
US20100001998A1 (en) 2004-01-30 2010-01-07 Electronic Scripting Products, Inc.: Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features
US20050168578A1 (en) 2004-02-04 2005-08-04 William Gobush: One camera stereo system
US8872914B2 (en) 2004-02-04 2014-10-28 Acushnet Company: One camera stereo system
US20060029296A1 (en) 2004-02-15 2006-02-09 King Martin T: Data capture from rendered documents using handheld device
US20060050979A1 (en) 2004-02-18 2006-03-09 Isao Kawahara: Method and device of image correction
US7656372B2 (en) 2004-02-25 2010-02-02 Nec Corporation: Method for driving liquid crystal display device having a display pixel region and a dummy pixel region
US7259873B2 (en) 2004-03-25 2007-08-21 Sikora, Ag: Method for measuring the dimension of a non-circular cross-section of an elongated article in particular of a flat cable or a sector cable
US20060098899A1 (en) 2004-04-01 2006-05-11 King Martin T: Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
US20050238201A1 (en) 2004-04-15 2005-10-27 Atid Shamaie: Tracking bimanual movements
US20050236558A1 (en) 2004-04-22 2005-10-27 Nobuo Nabeshima: Displacement detection apparatus
JP2011065652A (en) 2004-05-14 2011-03-31 Honda Motor Co Ltd: Sign based man-machine interaction
US7308112B2 (en) 2004-05-14 2007-12-11 Honda Motor Co., Ltd.: Sign based human-machine interaction
US7519223B2 (en) 2004-06-28 2009-04-14 Microsoft Corporation: Recognizing gestures and using gestures for interacting with software applications
US20080273764A1 (en) 2004-06-29 2008-11-06 Koninklijke Philips Electronics, N.V.: Personal Gesture Signature
JP2006019526A (en) 2004-07-01 2006-01-19 Ibiden Co Ltd: Optical element, package substrate, and device for optical communication
US20090102840A1 (en) 2004-07-15 2009-04-23 You Fu Li: System and method for 3d measurement and surface reconstruction
US8213707B2 (en) 2004-07-15 2012-07-03 City University Of Hong Kong: System and method for 3D measurement and surface reconstruction
US20060017807A1 (en) 2004-07-26 2006-01-26 Silicon Optix, Inc.: Panoramic vision system and method
WO2006020846A2 (en) 2004-08-11 2006-02-23 THE GOVERNMENT OF THE UNITED STATES OF AMERICA as represented by THE SECRETARY OF THE NAVY, Naval Research Laboratory: Simulated locomotion method and apparatus
US7606417B2 (en) 2004-08-16 2009-10-20 Fotonation Vision Limited: Foreground/background segmentation in digital images with differential exposure calculations
US20080126937A1 (en) 2004-10-05 2008-05-29 Sony France S.A.: Content-Management Interface
US20070086621A1 (en) 2004-10-13 2007-04-19 Manoj Aggarwal: Flexible layer tracking with weak online appearance model
GB2419433A (en) 2004-10-20 2006-04-26 Glasgow School Of Art: Automated Gesture Recognition
US20070042346A1 (en) 2004-11-24 2007-02-22 Battelle Memorial Institute: Method and apparatus for detection of rare cells
US8218858B2 (en) 2005-01-07 2012-07-10 Qualcomm Incorporated: Enhanced object reconstruction
US7598942B2 (en) 2005-02-08 2009-10-06 Oblong Industries, Inc.: System and method for gesture based control system
US20080246759A1 (en) 2005-02-23 2008-10-09 Craig Summers: Automatic Scene Modeling for the 3D Camera and 3D Video
US20060204040A1 (en) 2005-03-07 2006-09-14 Freeman William T: Occluding contour detection and storage for digital photography
JP2006259829A (en) 2005-03-15 2006-09-28 Omron Corp: Image processing system, image processor and processing method, recording medium, and program
US8185176B2 (en) 2005-04-26 2012-05-22 Novadaq Technologies, Inc.: Method and apparatus for vasculature visualization with applications in neurosurgery and neurology
US20090203994A1 (en) 2005-04-26 2009-08-13 Novadaq Technologies Inc.: Method and apparatus for vasculature visualization with applications in neurosurgery and neurology
US20090203993A1 (en) 2005-04-26 2009-08-13 Novadaq Technologies Inc.: Real time imaging during solid organ transplant
US20090309710A1 (en) 2005-04-28 2009-12-17 Aisin Seiki Kabushiki Kaisha: Vehicle Vicinity Monitoring System
US20060262421A1 (en) 2005-05-19 2006-11-23 Konica Minolta Photo Imaging, Inc.: Optical unit and image capturing apparatus including the same
US20060290950A1 (en) 2005-06-23 2006-12-28 Microsoft Corporation: Image superresolution through edge extraction and contrast enhancement
US20070014466A1 (en) 2005-07-08 2007-01-18 Leo Baldwin: Achieving convergent light rays emitted by planar array of light sources
US20080019576A1 (en) 2005-09-16 2008-01-24 Blake Senftner: Personalizing a Video
US20080319356A1 (en) 2005-09-22 2008-12-25 Cain Charles A: Pulsed cavitational ultrasound therapy
US7948493B2 (en) 2005-09-30 2011-05-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.: Apparatus, method and computer program for determining information about shape and/or location of an ellipse in a graphical image
US20080106746A1 (en) 2005-10-11 2008-05-08 Alexander Shpunt: Depth-varying light fields for three dimensional sensing
US20130120319A1 (en) 2005-10-31 2013-05-16 Extreme Reality Ltd.: Methods, Systems, Apparatuses, Circuits and Associated Computer Executable Code for Detecting Motion, Position and/or Orientation of Objects Within a Defined Spatial Region
US7940885B2 (en) 2005-11-09 2011-05-10 Dexela Limited: Methods and apparatus for obtaining low-dose imaging
US20070130547A1 (en) 2005-12-01 2007-06-07 Navisense, Llc: Method and system for touchless user interface control
CN1984236A (en) 2005-12-14 2007-06-20 Zhejiang University of Technology: Method for collecting characteristics in telecommunication flow information video detection
US20070238956A1 (en) 2005-12-22 2007-10-11 Gabriel Haras: Imaging device and method for operating an imaging device
US20100066676A1 (en) 2006-02-08 2010-03-18 Oblong Industries, Inc.: Gestural Control of Autonomous and Semi-Autonomous Systems
US20070206719A1 (en) 2006-03-02 2007-09-06 General Electric Company: Systems and methods for improving a resolution of an image
US20070211023A1 (en) 2006-03-13 2007-09-13 Navisense, Llc: Virtual user interface method and system thereof
EP1837665A2 (en) 2006-03-20 2007-09-26 Tektronix, Inc.: Waveform compression and display
US20070230929A1 (en) 2006-03-31 2007-10-04 Denso Corporation: Object-detecting device and method of extracting operation object
DE102007015495A1 (en) 2006-03-31 2007-10-04 Denso Corp., Kariya: Control object e.g. driver's finger, detection device for e.g. vehicle navigation system, has illumination section to illuminate one surface of object, and controller to control illumination of illumination and image recording sections
DE102007015497B4 (en) 2006-03-31 2014-01-23 Denso Corporation: Speech recognition device and speech recognition program
JP2007272596A (en) 2006-03-31 2007-10-18 Denso Corp: Operation object extracting device for mobile body
WO2007137093A2 (en) 2006-05-16 2007-11-29 Madentec: Systems and methods for a hands free mouse
US20080056752A1 (en) 2006-05-22 2008-03-06 Denton Gary A: Multipath Toner Patch Sensor for Use in an Image Forming Device
US8086971B2 (en) 2006-06-28 2011-12-27 Nokia Corporation: Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US20080031492A1 (en) 2006-07-10 2008-02-07 Fondazione Bruno Kessler: Method and apparatus for tracking a number of objects or object parts in image sequences
US20080244468A1 (en) 2006-07-13 2008-10-02 Nishihara H Keith: Gesture Recognition Interface System with Vertical Display
US20080013826A1 (en) 2006-07-13 2008-01-17 Northrop Grumman Corporation: Gesture recognition interface system
US20090103780A1 (en) 2006-07-13 2009-04-23 Nishihara H Keith: Hand-Gesture Recognition Method
US20080030429A1 (en) 2006-08-07 2008-02-07 International Business Machines Corporation: System and method of enhanced virtual reality
US20080064954A1 (en) 2006-08-24 2008-03-13 Baylor College Of Medicine: Method of measuring propulsion in lymphatic structures
US8064704B2 (en) 2006-10-11 2011-11-22 Samsung Electronics Co., Ltd.: Hand gesture recognition input system and method for a mobile phone
US20110025818A1 (en) 2006-11-07 2011-02-03 Jonathan Gallmeier: System and Method for Controlling Presentations and Videoconferences Using Hand Motions
US20080106637A1 (en) 2006-11-07 2008-05-08 Fujifilm Corporation: Photographing apparatus and photographing method
US20080111710A1 (en) 2006-11-09 2008-05-15 Marc Boillot: Method and Device to Control Touchless Recognition
US20080118091A1 (en) 2006-11-16 2008-05-22 Motorola, Inc.: Alerting system for a communication device
US8768022B2 (en) 2006-11-16 2014-07-01 Vanderbilt University: Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same
US20100141762A1 (en) 2006-11-20 2010-06-10 Jon Siann: Wireless Network Camera Systems
US20100066975A1 (en) 2006-11-29 2010-03-18 Bengt Rehnstrom: Eye tracking illumination
US7840031B2 (en) 2007-01-12 2010-11-23 International Business Machines Corporation: Tracking a range of body movement based on 3D captured image streams of a user
US7971156B2 (en) 2007-01-12 2011-06-28 International Business Machines Corporation: Controlling resource access based on user gesturing in a 3D captured image stream of the user
TW200844871A (en) 2007-01-12 2008-11-16 IBM: Controlling resource access based on user gesturing in a 3D captured image stream of the user
US20100020078A1 (en) 2007-01-21 2010-01-28 Prime Sense Ltd: Depth mapping using multi-beam illumination
US20080187175A1 (en) 2007-02-07 2008-08-07 Samsung Electronics Co., Ltd.: Method and apparatus for tracking object, and method and apparatus for calculating object pose information
US20150131859A1 (en) 2007-02-07 2015-05-14 Samsung Electronics Co., Ltd.: Method and apparatus for tracking object, and method and apparatus for calculating object pose information
US8471848B2 (en) 2007-03-02 2013-06-25 Organic Motion, Inc.: System and method for tracking three dimensional objects
JP2008227569A (en) 2007-03-08 2008-09-25 Seiko Epson Corp: Imaging apparatus, electronic device, imaging control method, and imaging control program
US8363010B2 (en) 2007-03-23 2013-01-29 Denso Corporation: Operating input device for reducing input error
US8023698B2 (en) 2007-03-30 2011-09-20 Denso Corporation: Information device operation apparatus
US20100118123A1 (en) 2007-04-02 2010-05-13 Prime Sense Ltd: Depth mapping using projected patterns
US20100201880A1 (en) 2007-04-13 2010-08-12 Pioneer Corporation: Shot size identifying apparatus and method, electronic apparatus, and computer program
US8045825B2 (en)2007-04-252011-10-25Canon Kabushiki KaishaImage processing apparatus and method for composition of real space images and virtual space images
US20080291160A1 (en)2007-05-092008-11-27Nintendo Co., Ltd.System and method for recognizing multi-axis gestures based on handheld controller accelerometer outputs
US20080278589A1 (en)2007-05-112008-11-13Karl Ola ThornMethods for identifying a target subject to automatically focus a digital camera and related systems, and computer program products
US8229134B2 (en)2007-05-242012-07-24University Of MarylandAudio camera using microphone arrays for real time capture of audio images and method for jointly processing the audio images with video images
US20080304740A1 (en)2007-06-062008-12-11Microsoft CorporationSalient Object Detection
US20090002489A1 (en)2007-06-292009-01-01Fuji Xerox Co., Ltd.Efficient tracking multiple objects through occlusion
JP2009031939A (en)2007-07-252009-02-12Advanced Telecommunication Research Institute International Image processing apparatus, image processing method, and image processing program
US8432377B2 (en)2007-08-302013-04-30Next Holdings LimitedOptical touchscreen with improved illumination
US20090153655A1 (en)2007-09-252009-06-18Tsukasa IkeGesture recognition apparatus and method thereof
US8144233B2 (en)2007-10-032012-03-27Sony CorporationDisplay control device, display control method, and display control program for superimposing images to create a composite image
US20090093307A1 (en)2007-10-082009-04-09Sony Computer Entertainment America Inc.Enhanced game controller
US20090116742A1 (en)2007-11-012009-05-07H Keith NishiharaCalibration of a Gesture Recognition Interface System
US20100264833A1 (en)2007-11-082010-10-21Tony Petrus Van EndertContinuous control of led light beam position and focus based on selection and intensity control
US20090128564A1 (en)2007-11-152009-05-21Canon Kabushiki KaishaImage processing apparatus and image processing method
US20110116684A1 (en)2007-12-212011-05-19Coffman Thayne RSystem and method for visually tracking with occlusions
US8319832B2 (en)2008-01-312012-11-27Denso CorporationInput apparatus and imaging apparatus
US8270669B2 (en)2008-02-062012-09-18Denso CorporationApparatus for extracting operating object and apparatus for projecting operating hand
US20090217211A1 (en)2008-02-272009-08-27Gesturetek, Inc.Enhanced input using recognized gestures
US7980885B2 (en)2008-03-032011-07-19Amad Mennekes Holding Gmbh & Co. KgPlug assembly with strain relief
US20090257623A1 (en)2008-04-152009-10-15Cyberlink CorporationGenerating effects in a webcam application
US20110043806A1 (en)2008-04-172011-02-24Avishay GuettaIntrusion warning system
US20110115486A1 (en)2008-04-182011-05-19Universitat ZurichTravelling-wave nuclear magnetic resonance method
US8249345B2 (en)2008-06-272012-08-21Mako Surgical Corp.Automatic image segmentation using contour propagation
WO2010007662A1 (en)2008-07-15Ichikawa Co., Ltd.Heat-resistant cushion material for forming press
US20100013832A1 (en)2008-07-162010-01-21Jing XiaoModel-Based Object Image Processing
US20100013662A1 (en)2008-07-172010-01-21Michael StudeProduct locating system
US20100023015A1 (en)2008-07-232010-01-28Otismed CorporationSystem and method for manufacturing arthroplasty jigs having improved mating accuracy
US9014414B2 (en)2008-07-292015-04-21Canon Kabushiki KaishaInformation processing apparatus and information processing method for processing image information at an arbitrary viewpoint in a physical space or virtual space
US20100027845A1 (en)2008-07-312010-02-04Samsung Electronics Co., Ltd.System and method for motion detection based on object trajectory
US20100026963A1 (en)2008-08-012010-02-04Andreas FaulstichOptical projection grid, scanning camera comprising an optical projection grid and method for generating an optical projection grid
US20100046842A1 (en)2008-08-192010-02-25Conwell William YMethods and Systems for Content Processing
US20100058252A1 (en)2008-08-282010-03-04Acer IncorporatedGesture guide system and a method for controlling a computer system by a gesture
US20100053209A1 (en)2008-08-292010-03-04Siemens Medical Solutions Usa, Inc.System for Processing Medical Image data to Provide Vascular Function Information
US20100053164A1 (en)2008-09-022010-03-04Samsung Electronics Co., LtdSpatially correlated rendering of three-dimensional content on display components having arbitrary positions
US20110176146A1 (en)2008-09-022011-07-21Cristina Alvarez DiezDevice and method for measuring a surface
US20100053612A1 (en)2008-09-032010-03-04National Central UniversityMethod and apparatus for scanning hyper-spectral image
JP2010060548A (en)2008-09-032010-03-18National Central UnivHyper-spectral scanning device and method for the same
US20100066737A1 (en)2008-09-162010-03-18Yuyu LiuDynamic-state estimating apparatus, dynamic-state estimating method, and program
WO2010032268A2 (en)2008-09-192010-03-25Avinash SaxenaSystem and method for controlling graphical objects
US20100091110A1 (en)2008-10-102010-04-15Gesturetek, Inc.Single camera tracker
US20100095206A1 (en)2008-10-132010-04-15Lg Electronics Inc.Method for providing a user interface using three-dimensional gestures and an apparatus using the same
CN101729808A (en)2008-10-14TCL CorporationRemote control method for television and system for remotely controlling television by same
US20110261178A1 (en)2008-10-152011-10-27The Regents Of The University Of CaliforniaCamera system with autonomous miniature camera and light source assembly and method for image enhancement
CN201332447Y (en)2008-10-222009-10-21Konka Group Co., Ltd.Television for controlling or operating game through gesture change
US8744122B2 (en)2008-10-222014-06-03Sri InternationalSystem and method for object detection from a moving platform
US20110234840A1 (en)2008-10-232011-09-29Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.Apparatus, method and computer program for recognizing a gesture in a picture, and apparatus, method and computer program for controlling a device
US20100121189A1 (en)2008-11-122010-05-13Sonosite, Inc.Systems and methods for image presentation for medical examination and interventional procedures
US20100125815A1 (en)2008-11-192010-05-20Ming-Jen WangGesture-based control method for interactive screen control
US20100127995A1 (en)2008-11-262010-05-27Panasonic CorporationSystem and method for differentiating between intended and unintended user input on a touchpad
US8907982B2 (en)2008-12-032014-12-09Alcatel LucentMobile device for augmented reality applications
US8289162B2 (en)2008-12-222012-10-16Wimm Labs, Inc.Gesture-based user interface for a wearable portable device
US20100158372A1 (en)2008-12-222010-06-24Electronics And Telecommunications Research InstituteApparatus and method for separating foreground and background
US20100162165A1 (en)2008-12-222010-06-24Apple Inc.User Interface Tools
WO2010076622A1 (en)2008-12-302010-07-08Nokia CorporationMethod, apparatus and computer program product for providing hand segmentation for gesture analysis
US8290208B2 (en)2009-01-122012-10-16Eastman Kodak CompanyEnhanced safety during laser projection
US20100177929A1 (en)2009-01-122010-07-15Kurtz Andrew FEnhanced safety during laser projection
US20120204133A1 (en)2009-01-132012-08-09Primesense Ltd.Gesture-Based User Interface
US20110279397A1 (en)2009-01-262011-11-17Zrro Technologies (2009) Ltd.Device and method for monitoring the object's behavior
US20100199230A1 (en)2009-01-302010-08-05Microsoft CorporationGesture recognizer system architicture
US20100199221A1 (en)2009-01-302010-08-05Microsoft CorporationNavigation of a virtual plane using depth
WO2010088035A2 (en)2009-01-302010-08-05Microsoft CorporationGesture recognizer system architecture
US20120050157A1 (en)2009-01-302012-03-01Microsoft CorporationGesture recognizer system architecture
US8395600B2 (en)2009-01-302013-03-12Denso CorporationUser interface device
US20110291925A1 (en)2009-02-022011-12-01Eyesight Mobile Technologies Ltd.System and method for object recognition and tracking in a video stream
US20100194863A1 (en)2009-02-022010-08-05Ydreams - Informatica, S.A.Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images
US20100199232A1 (en)2009-02-032010-08-05Massachusetts Institute Of TechnologyWearable Gestural Interface
US20100222102A1 (en)2009-02-052010-09-02Rodriguez Tony FSecond Screens and Widgets
US8304727B2 (en)2009-02-062012-11-06Siliconfile Technologies Inc.Image sensor capable of judging proximity to subject
US20140081521A1 (en)2009-02-152014-03-20Neonode Inc.Light-based touch controls on a steering wheel and dashboard
US20100208942A1 (en)2009-02-192010-08-19Sony CorporationImage processing device and method
US8244233B2 (en)2009-02-232012-08-14Augusta Technology, Inc.Systems and methods for operating a virtual whiteboard using a mobile phone device
US20100219934A1 (en)2009-02-272010-09-02Seiko Epson CorporationSystem of controlling device in response to gesture
US20100248836A1 (en)2009-03-302010-09-30Nintendo Co., Ltd.Computer readable storage medium having game program stored thereon and game apparatus
US20150054729A1 (en)2009-04-022015-02-26David MINNENRemote devices used in a markerless installation of a spatial operating environment incorporating gestural control
US20100275159A1 (en)2009-04-232010-10-28Takashi MatsubaraInput device
US8593417B2 (en)2009-04-302013-11-26Denso CorporationOperation apparatus for in-vehicle electronic device and method for controlling the same
US20100277411A1 (en)2009-05-012010-11-04Microsoft CorporationUser tracking feedback
US8605202B2 (en)2009-05-122013-12-10Koninklijke Philips N.V.Motion of image sensor, lens and/or focal length to reduce motion blur
JP2012527145A (en)2009-05-122012-11-01Koninklijke Philips Electronics N.V. Camera, system having camera, method of operating camera, and method of deconvolving recorded image
US20120065499A1 (en)2009-05-202012-03-15Hitachi Medical CorporationMedical image diagnosis device and region-of-interest setting method therefor
US20100296698A1 (en)2009-05-252010-11-25Visionatics Inc.Motion object detection method using adaptive background model and computer-readable storage medium
US8112719B2 (en)2009-05-262012-02-07Topseed Technology Corp.Method for controlling gesture-based remote control system
US20100302357A1 (en)2009-05-262010-12-02Che-Hao HsuGesture-based remote control system
WO2010138741A1 (en)2009-05-272010-12-02Analog Devices, Inc.Position measurement systems using position sensitive detectors
JP2011010258A (en)2009-05-272011-01-13Seiko Epson CorpImage processing apparatus, image display system, and image extraction device
US20100302015A1 (en)2009-05-292010-12-02Microsoft CorporationSystems and methods for immersive interaction with virtual objects
US20110296353A1 (en)2009-05-292011-12-01Canesta, Inc.Method and system implementing user-centric gesture control
US20100306712A1 (en)2009-05-292010-12-02Microsoft CorporationGesture Coach
US20100309097A1 (en)2009-06-042010-12-09Roni RavivHead mounted 3d display
WO2010148155A2 (en)2009-06-162010-12-23Microsoft CorporationSurface computer user interaction
US20100321377A1 (en)2009-06-232010-12-23Disney Enterprises, Inc. (Burbank, Ca)System and method for integrating multiple virtual rendering systems to provide an augmented reality
CN101930610A (en)2009-06-262010-12-29Visionatics Inc. Moving Object Detection Method Using Adaptive Background Model
US20110007072A1 (en)2009-07-092011-01-13University Of Central Florida Research Foundation, Inc.Systems and methods for three-dimensionally modeling moving objects
JP2011107681A (en)2009-07-172011-06-02Nikon CorpFocusing device and camera
US20120113316A1 (en)2009-07-172012-05-10Nikon CorporationFocusing device and camera
US20110026765A1 (en)2009-07-312011-02-03Echostar Technologies L.L.C.Systems and methods for hand gesture control of an electronic device
WO2011024193A2 (en)2009-08-202011-03-03Natarajan KannanElectronically variable field of view (fov) infrared illuminator
US20110057875A1 (en)2009-09-042011-03-10Sony CorporationDisplay control apparatus, display control method, and display control program
US20110066984A1 (en)*2009-09-162011-03-17Google Inc.Gesture Recognition on Computing Device
US20110291988A1 (en)2009-09-222011-12-01Canesta, Inc.Method and system for recognition of user gesture interaction with passive surface video displays
WO2011036618A2 (en)2009-09-222011-03-31Pebblestech Ltd.Remote control of computer devices
US20110080470A1 (en)2009-10-022011-04-07Kabushiki Kaisha ToshibaVideo reproduction apparatus and video reproduction method
US20110080337A1 (en)2009-10-052011-04-07Hitachi Consumer Electronics Co., Ltd.Image display device and display control method thereof
US20110080490A1 (en)2009-10-072011-04-07Gesturetek, Inc.Proximity object tracker
US20120218263A1 (en)2009-10-122012-08-30Metaio GmbhMethod for representing virtual information in a view of a real environment
WO2011044680A1 (en)2009-10-132011-04-21Recon Instruments Inc.Control systems and methods for head-mounted information systems
WO2011045789A1 (en)2009-10-132011-04-21Pointgrab Ltd.Computer vision gesture based control of a device
US20110093820A1 (en)2009-10-192011-04-21Microsoft CorporationGesture personalization and profile roaming
US20110107216A1 (en)2009-11-032011-05-05Qualcomm IncorporatedGesture-based user interface
US20110119640A1 (en)2009-11-192011-05-19Microsoft CorporationDistance scalable no touch computing
US8843857B2 (en)2009-11-192014-09-23Microsoft CorporationDistance scalable no touch computing
KR101092909B1 (en)2009-11-272011-12-12District Holdings Co., Ltd.Gesture Interactive Hologram Display Apparatus and Method
US20110205151A1 (en)2009-12-042011-08-25John David NewtonMethods and Systems for Position Detection
US20110134112A1 (en)2009-12-082011-06-09Electronics And Telecommunications Research InstituteMobile terminal having gesture recognition function and interface system using the same
US20120236288A1 (en)2009-12-082012-09-20Qinetiq LimitedRange Based Sensing
JP4906960B2 (en)2009-12-172012-03-28NTT Docomo, Inc. Method and apparatus for interaction between portable device and screen
US8659594B2 (en)2009-12-182014-02-25Electronics And Telecommunications Research InstituteMethod and apparatus for capturing motion of dynamic object
US20110148875A1 (en)2009-12-182011-06-23Electronics And Telecommunications Research InstituteMethod and apparatus for capturing motion of dynamic object
US8514221B2 (en)2010-01-052013-08-20Apple Inc.Working with 3D objects
US20110169726A1 (en)2010-01-082011-07-14Microsoft CorporationEvolving universal gesture sets
US20110173574A1 (en)2010-01-082011-07-14Microsoft CorporationIn application gesture interpretation
US8631355B2 (en)2010-01-082014-01-14Microsoft CorporationAssigning gesture dictionaries
US7961174B1 (en)2010-01-152011-06-14Microsoft CorporationTracking groups of users in motion capture system
US20110181509A1 (en)2010-01-262011-07-28Nokia CorporationGesture Control
RU2422878C1 (en)2010-02-042011-06-27Vladimir Valentinovich DevyatkovMethod of controlling television using multimodal interface
US20110193778A1 (en)2010-02-052011-08-11Samsung Electronics Co., Ltd.Device and method for controlling mouse pointer
US8957857B2 (en)2010-02-052015-02-17Samsung Electronics Co., LtdDevice and method for controlling mouse pointer
US8659658B2 (en)2010-02-092014-02-25Microsoft CorporationPhysical interaction zone for gesture-based user interfaces
US20110213664A1 (en)2010-02-282011-09-01Osterhout Group, Inc.Local advertising content on an interactive head-mounted eyepiece
US20140063055A1 (en)2010-02-282014-03-06Osterhout Group, Inc.Ar glasses specific user interface and control interface based on a connected external device type
US20110228978A1 (en)2010-03-182011-09-22Hon Hai Precision Industry Co., Ltd.Foreground object detection system and method
CN102201121A (en)2010-03-232011-09-28Hon Hai Precision Industry (Shenzhen) Co., Ltd.System and method for detecting article in video scene
WO2011119154A1 (en)2010-03-242011-09-29Hewlett-Packard Development Company, L.P.Gesture mapping for display device
EP2369443A2 (en)2010-03-252011-09-28User Interface in Sweden ABSystem and method for gesture detection and feedback
US20110243451A1 (en)2010-03-302011-10-06Hideki OyaizuImage processing apparatus and method, and program
US20150234569A1 (en)2010-03-302015-08-20Harman Becker Automotive Systems GmbhVehicle user interface unit for a vehicle electronic device
US20110251896A1 (en)2010-04-092011-10-13Affine Systems, Inc.Systems and methods for matching an advertisement to a video
CN201859393U (en)2010-04-132011-06-08Ren FengThree-dimensional gesture recognition box
EP2378488A2 (en)2010-04-192011-10-19Ydreams - Informática, S.a.Various methods and apparatuses for achieving augmented reality
US20130038694A1 (en)2010-04-272013-02-14Sanjay NichaniMethod for moving object detection using an image sensor and structured light
US9119670B2 (en)2010-04-282015-09-01Ryerson UniversitySystem and methods for intraoperative guidance feedback
US20110267259A1 (en)2010-04-302011-11-03Microsoft CorporationReshapable connector with variable rigidity
US20110267265A1 (en)2010-04-302011-11-03Verizon Patent And Licensing, Inc.Spatial-input-based cursor projection systems and methods
CN102236412A (en)2010-04-302011-11-09Acer Inc. Three-dimensional gesture recognition system and vision-based gesture recognition method
GB2480140A (en)2010-05-042011-11-09Timocco LtdTracking and Mapping an Object to a Target
US20130057469A1 (en)2010-05-112013-03-07Nippon Systemware Co LtdGesture recognition device, method, program, and computer-readable medium upon which program is stored
US20110289456A1 (en)2010-05-182011-11-24Microsoft CorporationGestures And Gesture Modifiers For Manipulating A User-Interface
US20110289455A1 (en)2010-05-182011-11-24Microsoft CorporationGestures And Gesture Recognition For Manipulating A User-Interface
US20110286676A1 (en)2010-05-202011-11-24Edge3 Technologies LlcSystems and related methods for three dimensional gesture recognition in vehicles
US20110299737A1 (en)2010-06-042011-12-08Acer IncorporatedVision-based hand movement recognition system and method thereof
US20110304650A1 (en)2010-06-092011-12-15The Boeing CompanyGesture-Based Human Machine Interface
US20110304600A1 (en)2010-06-112011-12-15Seiko Epson CorporationOptical position detecting device and display device with position detecting function
US20110310220A1 (en)2010-06-162011-12-22Microsoft CorporationDepth camera illuminator with superluminescent light-emitting diode
US20110314427A1 (en)2010-06-182011-12-22Samsung Electronics Co., Ltd.Personalization using custom gestures
US20110310007A1 (en)2010-06-222011-12-22Microsoft CorporationItem navigation using motion-capture data
US8582809B2 (en)2010-06-282013-11-12Robert Bosch GmbhMethod and device for detecting an interfering object in a camera image
US20110317871A1 (en)2010-06-292011-12-29Microsoft CorporationSkeletal joint recognition and tracking system
WO2012027422A2 (en)2010-08-242012-03-01Qualcomm IncorporatedMethods and apparatus for interacting with an electronic device application by moving an object in the air over an electronic device display
US20130241832A1 (en)2010-09-072013-09-19Zrro Technologies (2009) Ltd.Method and device for controlling the behavior of virtual objects on a display
US8842084B2 (en)2010-09-082014-09-23Telefonaktiebolaget L M Ericsson (Publ)Gesture-based object manipulation methods and devices
US20120068914A1 (en)2010-09-202012-03-22Kopin CorporationMiniature communications gateway for head mounted display
WO2012039140A1 (en)2010-09-22Shimane PrefectureOperation input apparatus, operation input method, and program
US20130181897A1 (en)2010-09-222013-07-18Shimane Prefectural GovernmentOperation input apparatus, operation input method, and program
US20130187952A1 (en)2010-10-102013-07-25Rafael Advanced Defense Systems Ltd.Network-based real time registered augmented reality for mobile devices
CN101951474A (en)2010-10-12TPV Display Technology (Xiamen) Co., Ltd.Television technology based on gesture control
US20120098744A1 (en)2010-10-212012-04-26Verizon Patent And Licensing, Inc.Systems, methods, and apparatuses for spatial input associated with a display
US20130208948A1 (en)2010-10-242013-08-15Rafael Advanced Defense Systems Ltd.Tracking and identification of a moving object from a moving sensor using a 3d model
CN102053702A (en)2010-10-262011-05-11Nanjing University of Aeronautics and AstronauticsDynamic gesture control system and method
US8817087B2 (en)2010-11-012014-08-26Robert Bosch GmbhRobust video-based handwriting and gesture recognition for in-car applications
US20120113223A1 (en)2010-11-052012-05-10Microsoft CorporationUser Interaction in Augmented Reality
US20120159380A1 (en)2010-12-202012-06-21Kocienda Kenneth LDevice, Method, and Graphical User Interface for Navigation of Concurrently Open Software Applications
US20120163675A1 (en)2010-12-222012-06-28Electronics And Telecommunications Research InstituteMotion capture apparatus and method
US20120270654A1 (en)2011-01-052012-10-25Qualcomm IncorporatedMethod and apparatus for scaling gesture recognition to physical dimensions of a user
US8929609B2 (en)2011-01-052015-01-06Qualcomm IncorporatedMethod and apparatus for scaling gesture recognition to physical dimensions of a user
US20120194517A1 (en)2011-01-312012-08-02Microsoft CorporationUsing a Three-Dimensional Environment Model in Gameplay
US20130307935A1 (en)2011-02-012013-11-21National University Of SingaporeImaging system and method
US20130321265A1 (en)*2011-02-092013-12-05Primesense Ltd.Gaze-Based Display Control
US20140222385A1 (en)2011-02-252014-08-07Smith Heimann GmbhImage reconstruction based on parametric models
US20120223959A1 (en)2011-03-012012-09-06Apple Inc.System and method for a touchscreen slider with toggle control
US20140071069A1 (en)2011-03-292014-03-13Glen J. AndersonTechniques for touch and non-touch user interaction input
US20120250936A1 (en)2011-03-312012-10-04Smart Technologies UlcInteractive input system and method
US9182838B2 (en)2011-04-192015-11-10Microsoft Technology Licensing, LlcDepth camera-based relative gesture detection
US20120274781A1 (en)2011-04-292012-11-01Siemens CorporationMarginal space learning for multi-person tracking over mega pixel imagery
US20120281873A1 (en)2011-05-052012-11-08International Business Machines CorporationIncorporating video meta-data in 3d models
US20120293667A1 (en)2011-05-162012-11-22Ut-Battelle, LlcIntrinsic feature-based pose measurement for imaging motion compensation
US20120314030A1 (en)2011-06-072012-12-13International Business Machines CorporationEstimation of object properties in 3d world
US20140095119A1 (en)2011-06-132014-04-03Industry-Academic Cooperation Foundation, Yonsei UniversitySystem and method for location-based construction project management
US20120320080A1 (en)2011-06-142012-12-20Microsoft CorporationMotion based virtual object navigation
US20130019204A1 (en)2011-07-142013-01-17Microsoft CorporationAdjusting content attributes through actions on context based menu
US20130033483A1 (en)2011-08-012013-02-07Soungmin ImElectronic device for displaying three-dimensional image and method of using the same
US20150363070A1 (en)2011-08-042015-12-17Itay KatzSystem and method for interfacing with a device via a 3d display
US8891868B1 (en)2011-08-042014-11-18Amazon Technologies, Inc.Recognizing gestures captured by video
US20130044951A1 (en)2011-08-192013-02-21Der-Chun CherngMoving object detection method using image contrast enhancement
US20140132738A1 (en)2011-08-232014-05-15Panasonic CorporationThree-dimensional image capture device, lens control device and program
US20130050425A1 (en)2011-08-242013-02-28Soungmin ImGesture-based user interface method and apparatus
US20140225826A1 (en)2011-09-072014-08-14Nitto Denko CorporationMethod for detecting motion of input body and input device using same
US8803885B1 (en)2011-09-072014-08-12Infragistics, Inc.Method for evaluating spline parameters for smooth curve sampling
US20140055385A1 (en)2011-09-272014-02-27Elo Touch Solutions, Inc.Scaling of gesture based input
US20130086531A1 (en)2011-09-292013-04-04Kabushiki Kaisha ToshibaCommand issuing device, method and computer program product
US20130097566A1 (en)2011-10-172013-04-18Carl Fredrik Alexander BERGLUNDSystem and method for displaying items on electronic devices
US20150193669A1 (en)2011-11-212015-07-09Pixart Imaging Inc.System and method based on hybrid biometric detection
US8235529B1 (en)2011-11-302012-08-07Google Inc.Unlocking a screen using eye tracking information
US20130148852A1 (en)2011-12-082013-06-13Canon Kabushiki KaishaMethod, apparatus and system for tracking an object in a sequence of images
US20150009149A1 (en)2012-01-052015-01-08California Institute Of TechnologyImaging surround system for touch-free display control
US20150097772A1 (en)2012-01-062015-04-09Thad Eugene StarnerGaze Signal Based on Physical Characteristics of the Eye
US8878749B1 (en)2012-01-062014-11-04Google Inc.Systems and methods for position estimation
US20150227795A1 (en)2012-01-062015-08-13Google Inc.Object Outlining to Initiate a Visual Search
US20150084864A1 (en)2012-01-092015-03-26Google Inc.Input Method
WO2013109609A2 (en)2012-01-172013-07-25Leap Motion, Inc.Enhanced contrast for object detection and characterization by optical imaging
US20140369558A1 (en)2012-01-172014-12-18David HolzSystems and methods for machine control
US20130182079A1 (en)2012-01-172013-07-18OcuspecMotion capture using cross-sections of an object
WO2013109608A2 (en)2012-01-172013-07-25Leap Motion, Inc.Systems and methods for capturing motion in three-dimensional space
US20140139641A1 (en)2012-01-172014-05-22David HolzSystems and methods for capturing motion in three-dimensional space
US20160086046A1 (en)2012-01-172016-03-24Leap Motion, Inc.Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9070019B2 (en)2012-01-172015-06-30Leap Motion, Inc.Systems and methods for capturing motion in three-dimensional space
US8693731B2 (en)2012-01-172014-04-08Leap Motion, Inc.Enhanced contrast for object detection and characterization by optical imaging
US8638989B2 (en)2012-01-172014-01-28Leap Motion, Inc.Systems and methods for capturing motion in three-dimensional space
US20130182897A1 (en)2012-01-172013-07-18David HolzSystems and methods for capturing motion in three-dimensional space
US20140177913A1 (en)2012-01-172014-06-26David HolzEnhanced contrast for object detection and characterization by optical imaging
US20130191911A1 (en)2012-01-202013-07-25Apple Inc.Device, Method, and Graphical User Interface for Accessing an Application in a Locked Device
US20130194173A1 (en)2012-02-012013-08-01Ingeonix CorporationTouch free control of electronic systems and associated methods
US8854433B1 (en)2012-02-032014-10-07Aquifi, Inc.Method and system enabling natural user interface gestures with an electronic system
US20130222640A1 (en)2012-02-272013-08-29Samsung Electronics Co., Ltd.Moving image shooting apparatus and method of using a camera device
US20130222233A1 (en)2012-02-292013-08-29Korea Institute Of Science And TechnologySystem and method for implementing 3-dimensional user interface
US20130239059A1 (en)2012-03-062013-09-12Acer IncorporatedTouch screen folder control
US8930852B2 (en)2012-03-062015-01-06Acer IncorporatedTouch screen folder control
US20130258140A1 (en)2012-03-102013-10-03Digitaloptics CorporationMiniature MEMS Autofocus Zoom Camera Module
US20140375547A1 (en)2012-03-132014-12-25Eyesight Mobile Technologies Ltd.Touch free user interface
US9122354B2 (en)2012-03-142015-09-01Texas Instruments IncorporatedDetecting wave gestures near an illuminated surface
US20130252691A1 (en)2012-03-202013-09-26Ilias AlexopoulosMethods and systems for a gesture-controlled lottery terminal
US20130283213A1 (en)2012-03-262013-10-24Primesense Ltd.Enhanced virtual touchpad
US8942881B2 (en)2012-04-022015-01-27Google Inc.Gesture-based automotive controls
US20130257736A1 (en)2012-04-032013-10-03Wistron CorporationGesture sensing apparatus, electronic system having gesture input function, and gesture determining method
US20130271397A1 (en)2012-04-162013-10-17Qualcomm IncorporatedRapid gesture re-engagement
US20130300831A1 (en)2012-05-112013-11-14Loren MavromatisCamera scene fitting of real world scenes
US20150016777A1 (en)2012-06-112015-01-15Magic Leap, Inc.Planar waveguide apparatus with diffraction element(s) and system employing same
US20140002365A1 (en)2012-06-282014-01-02Intermec Ip Corp.Dual screen display for mobile computing device
US20140010441A1 (en)2012-07-092014-01-09Qualcomm IncorporatedUnsupervised movement detection and gesture recognition
US20150153833A1 (en)2012-07-132015-06-04Softkinetic SoftwareMethod and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand
US20140015831A1 (en)2012-07-162014-01-16Electronics And Telecommunications Research InstitudeApparatus and method for processing manipulation of 3d virtual object
US20150323785A1 (en)2012-07-272015-11-12Nissan Motor Co., Ltd.Three-dimensional object detection device and foreign matter detection device
US20140037135A1 (en)2012-07-312014-02-06Omek Interactive, Ltd.Context-driven adjustment of camera parameters
US20140055396A1 (en)2012-08-272014-02-27Microchip Technology IncorporatedInput Device with Hand Posture Control
US20140064566A1 (en)2012-08-292014-03-06Xerox CorporationHeuristic-based approach for automatic payment gesture classification and detection
US9124778B1 (en)2012-08-292015-09-01Nomi CorporationApparatuses and methods for disparity-based tracking and analysis of objects in a region of interest
US20140063060A1 (en)2012-09-042014-03-06Qualcomm IncorporatedAugmented reality surface segmentation
US20140085203A1 (en)2012-09-262014-03-27Seiko Epson CorporationVideo image display system and head mounted display
US20140098018A1 (en)2012-10-042014-04-10Microsoft CorporationWearable sensor for tracking articulated body-parts
US20150293597A1 (en)2012-10-312015-10-15Pranav MISHRAMethod, Apparatus and Computer Program for Enabling a User Input Command to be Performed
US20140125813A1 (en)2012-11-082014-05-08David HolzObject detection and tracking with variable-field illumination devices
US20140125775A1 (en)2012-11-082014-05-08Leap Motion, Inc.Three-dimensional image sensors
US20140134733A1 (en)2012-11-132014-05-15The Board Of Trustees Of The Leland Stanford Junior UniversityChemically defined production of cardiomyocytes from pluripotent stem cells
US20140139425A1 (en)2012-11-192014-05-22Sony CorporationImage processing apparatus, image processing method, image capture apparatus and computer program
US20150304593A1 (en)2012-11-272015-10-22Sony CorporationDisplay apparatus, display method, and computer program
US20140157135A1 (en)2012-12-032014-06-05Samsung Electronics Co., Ltd.Method and mobile terminal for controlling bluetooth low energy device
US20140161311A1 (en)2012-12-102014-06-12Hyundai Motor CompanySystem and method for object image detecting
US20140168062A1 (en)2012-12-132014-06-19Eyesight Mobile Technologies Ltd.Systems and methods for triggering actions based on touch-free gesture detection
US20140176420A1 (en)2012-12-262014-06-26Futurewei Technologies, Inc.Laser Beam Based Gesture Control Interface for Mobile Devices
US20140189579A1 (en)2013-01-022014-07-03Zrro Technologies (2009) Ltd.System and method for controlling zooming and/or scrolling
US20140192024A1 (en)2013-01-082014-07-10Leap Motion, Inc.Object detection and tracking with audio and optical signals
US9182812B2 (en)2013-01-082015-11-10AyotleVirtual sensor systems and methods
US9501152B2 (en)2013-01-152016-11-22Leap Motion, Inc.Free-space user interface and control using virtual constructs
US20140201666A1 (en)2013-01-152014-07-17Raffi BedikianDynamic, free-space user interactions for machine control
US9459697B2 (en)2013-01-152016-10-04Leap Motion, Inc.Dynamic, free-space user interactions for machine control
US10042430B2 (en)2013-01-152018-08-07Leap Motion, Inc.Free-space user interface and control using virtual constructs
US10739862B2 (en)2013-01-152020-08-11Ultrahaptics IP Two LimitedFree-space user interface and control using virtual constructs
US11353962B2 (en)2013-01-152022-06-07Ultrahaptics IP Two LimitedFree-space user interface and control using virtual constructs
US20140201689A1 (en)2013-01-152014-07-17Raffi BedikianFree-space user interface and control using virtual constructs
US11740705B2 (en)2013-01-152023-08-29Ultrahaptics IP Two LimitedMethod and system for controlling a machine according to a characteristic of a control object
US11874970B2 (en)2013-01-152024-01-16Ultrahaptics IP Two LimitedFree-space user interface and control using virtual constructs
US20140223385A1 (en)2013-02-052014-08-07Qualcomm IncorporatedMethods for system engagement via 3d object detection
US20140225918A1 (en)2013-02-142014-08-14Qualcomm IncorporatedHuman-body-gesture-based region and volume selection for hmd
US20140236529A1 (en)2013-02-182014-08-21Motorola Mobility LlcMethod and Apparatus for Determining Displacement from Acceleration Data
US20140240215A1 (en)2013-02-262014-08-28Corel CorporationSystem and method for controlling a user interface utility using a vision system
US20140240225A1 (en)2013-02-262014-08-28Pointgrab Ltd.Method for touchless control of a device
US20140248950A1 (en)2013-03-012014-09-04Martin Tosas BautistaSystem and method of interaction for mobile devices
US20140249961A1 (en)2013-03-042014-09-04Adidas AgInteractive cubicle and method for determining a body shape
US9056396B1 (en)2013-03-052015-06-16AutofussProgramming of a robotic arm using a motion capture system
US20140253785A1 (en)2013-03-072014-09-11Mediatek Inc.Auto Focus Based on Analysis of State or State Change of Image Content
US20140253512A1 (en)2013-03-112014-09-11Hitachi Maxell, Ltd.Manipulation detection apparatus, manipulation detection method, and projector
US9389779B2 (en)2013-03-142016-07-12Intel CorporationDepth-based user interface gesture control
US20150253428A1 (en)2013-03-152015-09-10Leap Motion, Inc.Determining positional information for an object in space
US8738523B1 (en)2013-03-152014-05-27State Farm Mutual Automobile Insurance CompanySystems and methods to identify and profile a vehicle operator
US8954340B2 (en)2013-03-152015-02-10State Farm Mutual Automobile Insurance CompanyRisk evaluation based on vehicle operator behavior
US20140267098A1 (en)2013-03-152014-09-18Lg Electronics Inc.Mobile terminal and method of controlling the mobile terminal
US20140282282A1 (en)2013-03-152014-09-18Leap Motion, Inc.Dynamic user interactions for display control
US20140307920A1 (en)2013-04-122014-10-16David HolzSystems and methods for tracking occluded objects in three-dimensional space
US20140320408A1 (en)2013-04-262014-10-30Leap Motion, Inc.Non-tactile interface systems and methods
US20140344762A1 (en)2013-05-142014-11-20Qualcomm IncorporatedAugmented reality (ar) capture & play
US20140364209A1 (en)2013-06-072014-12-11Sony Corporation Entertainment America LLCSystems and Methods for Using Reduced Hops to Generate an Augmented Virtual Reality Scene Within A Head Mounted System
US20140364212A1 (en)2013-06-082014-12-11Sony Computer Entertainment Inc.Systems and methods for transitioning between transparent mode and non-transparent mode in a head mounted dipslay
WO2014208087A1 (en)2013-06-27Panasonic Intellectual Property Corporation of AmericaMotion sensor device having plurality of light sources
US20150003673A1 (en)2013-07-012015-01-01Hand Held Products, Inc.Dimensioning system
US20150022447A1 (en)2013-07-222015-01-22Leap Motion, Inc.Non-linear motion capture using frenet-serret frames
US20150029091A1 (en)2013-07-292015-01-29Sony CorporationInformation presentation apparatus and information processing system
US9342160B2 (en)2013-07-312016-05-17Microsoft Technology Licensing, LlcErgonomic physical interaction zone cursor mapping
US20150040040A1 (en)2013-08-052015-02-05Alexandru BalanTwo-hand interaction with natural user interface
US11567578B2 (en)2013-08-092023-01-31Ultrahaptics IP Two LimitedSystems and methods of free-space gestural interaction
US10281987B1 (en)2013-08-092019-05-07Leap Motion, Inc.Systems and methods of free-space gestural interaction
US10831281B2 (en)2013-08-092020-11-10Ultrahaptics IP Two LimitedSystems and methods of free-space gestural interaction
GB2519418A (en)2013-08-212015-04-22Sony Comp Entertainment EuropeHead-mountable apparatus and systems
WO2015026707A1 (en)2013-08-222015-02-26Sony CorporationClose range natural user interface system and method of operation thereof
US8922590B1 (en)2013-10-012014-12-30Myth Innovations, Inc.Augmented reality interface and method of use
US20150103004A1 (en)2013-10-162015-04-16Leap Motion, Inc.Velocity field interaction for free space gesture interface and control
US20150116214A1 (en)2013-10-292015-04-30Anders Grunnet-JepsenGesture based human computer interaction
US20150115802A1 (en)2013-10-312015-04-30General Electric CompanyCustomizable modular luminaire
US20150172539A1 (en)2013-12-172015-06-18Amazon Technologies, Inc.Distributing processing for imaging processing
US20150205358A1 (en)2014-01-202015-07-23Philip Scott LyrenElectronic Device with Touchless User Interface
US20150205400A1 (en)2014-01-212015-07-23Microsoft CorporationGrip Detection
US20150206321A1 (en)2014-01-232015-07-23Michael J. ScavezzeAutomated content scrolling
US20150258432A1 (en)2014-03-142015-09-17Sony Computer Entertainment Inc.Gaming device with volumetric sensing
US20150261291A1 (en)2014-03-142015-09-17Sony Computer Entertainment Inc.Methods and Systems Tracking Head Mounted Display (HMD) and Calibrations for HMD Headband Adjustments
US20150309629A1 (en)2014-04-282015-10-29Qualcomm IncorporatedUtilizing real world objects for user input
US20160062573A1 (en)2014-09-022016-03-03Apple Inc.Reduced size user interface
US20160093105A1 (en)2014-09-302016-03-31Sony Computer Entertainment Inc.Display of text information on a head-mounted display
US20170102791A1 (en)2015-10-092017-04-13Zspace, Inc.Virtual Plane in a Stylus Based Stereoscopic Display System

Non-Patent Citations (42)

* Cited by examiner, † Cited by third party
Title
Arthington, et al., "Cross-section Reconstruction During Uniaxial Loading," Measurement Science and Technology, vol. 20, No. 7, Jun. 10, 2009, Retrieved from the Internet: http://iopscience.iop.org/0957-0233/20/7/075701, pp. 1-9.
Ballan et al., "Lecture Notes Computer Science: 12th European Conference on Computer Vision: Motion Capture of Hands in Action Using Discriminative Salient Points", Oct. 7-13, 2012 [retrieved Jul. 14, 2016], Springer Berlin Heidelberg, vol. 7577, pp. 640-653. Retrieved from the Internet: <http://link.springer.com/chapter/10.1007/978-3-642-33783-3_46>.
Barat et al., "Feature Correspondences From Multiple Views of Coplanar Ellipses", 2nd International Symposium on Visual Computing, Author Manuscript, 2006, 10 pages.
Bardinet, et al., "Fitting of iso-Surfaces Using Superquadrics and Free-Form Deformations" [on-line], Jun. 24-25, 1994 [retrieved Jan. 9, 2014], 1994 Proceedings of IEEE Workshop on Biomedical Image Analysis, Retrieved from the Internet: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=315882&tag=1, pp. 184-193.
Butail, S., et al., "Three-Dimensional Reconstruction of the Fast-Start Swimming Kinematics of Densely Schooling Fish," Journal of the Royal Society Interface, Jun. 3, 2011, retrieved from the Internet <http://www.ncbi.nlm.nih.gov/pubmed/21642367>, pp. 1-12.
Cheikh et al., "Multipeople Tracking Across Multiple Cameras", International Journal on New Computer Architectures and Their Applications (IJNCAA), vol. 2, No. 1, 2012, pp. 23-33.
Chung, et al., "Recovering LSHGCs and SHGCs from Stereo," International Journal of Computer Vision, vol. 20, No. 1/2, 1996, pp. 43-58.
Cui et al., "Applications of Evolutionary Computing: Vision-Based Hand Motion Capture Using Genetic Algorithm", 2004 [retrieved Jul. 15, 2016], Springer Berlin Heidelberg, vol. 3005 of LNCS, pp. 289-300. Retrieved from the Internet: <http://link.springer.com/chapter/10.1007/978-3-540-24653-4_30>.
Cumani, A., et al., "Recovering the 3D Structure of Tubular Objects from Stereo Silhouettes," Pattern Recognition, Elsevier, GB, vol. 30, No. 7, Jul. 1, 1997, 9 pages.
Davis et al., "Toward 3-D Gesture Recognition", International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, No. 03, 1999, pp. 381-393.
Delamarre et al., "Finding Pose of Hand in Video Images: A Stereo-based Approach", Apr. 14-16, 1998 [retrieved Jul. 15, 2016], Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 585-590. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=671011&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D671011>.
Di Zenzo, S., et al., "Advances in Image Segmentation," Image and Vision Computing, Elsevier, Guildford, GBN, vol. 1, No. 1, Copyright Butterworth & Co Ltd., Nov. 1, 1983, pp. 196-210.
Dombeck, D., et al., "Optical Recording of Action Potentials with Second-Harmonic Generation Microscopy," The Journal of Neuroscience, Jan. 28, 2004, vol. 24(4): pp. 999-1003.
Forbes, K., et al., "Using Silhouette Consistency Constraints to Build 3D Models," University of Cape Town, Copyright De Beers 2003, Retrieved from the Internet: <http://www.dip.ee.uct.ac.za/˜kforbes/Publications/Forbes2003Prasa.pdf> on Jun. 17, 2013, 6 pages.
Fukui et al. "Multiple Object Tracking System with Three Level Continuous Processes" IEEE, 1992, pp. 19-27.
Gorce et al., "Model-Based 3D Hand Pose Estimation from Monocular Video", Feb. 24, 2011 [retrieved Jul. 15, 2016], IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, Issue: 9, pp. 1793-1805. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5719617&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5719617>.
Guo et al., "Featured Wand for 3D Interaction", Jul. 2-5, 2007 [retrieved Jul. 15, 2016], 2007 IEEE International Conference on Multimedia and Expo, pp. 2230-2233. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4285129&tag=1&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4285129%26tag%3D1>.
Heikkila, J., "Accurate Camera Calibration and Feature Based 3-D Reconstruction from Monocular Image Sequences", Infotech Oulu and Department of Electrical Engineering, University of Oulu, 1997, 126 pages.
Kanhangad, V., et al., "A Unified Framework for Contactless Hand Verification," IEEE Transactions on Information Forensics and Security, IEEE, Piscataway, NJ, US, vol. 6, No. 3, Sep. 1, 2011, pp. 1014-1027.
Kim, et al., "Development of an Orthogonal Double-Image Processing Algorithm to Measure Bubble," Department of Nuclear Engineering and Technology, Seoul National University Korea, vol. 39 No. 4, Published Jul. 6, 2007, pp. 313-326.
Kulesza, et al., "Arrangement of a Multi Stereo Visual Sensor System for a Human Activities Space," Source: Stereo Vision, Book edited by: Dr. Asim Bhatti, ISBN 978-953-7619-22-0, Copyright Nov. 2008, I-Tech, Vienna, Austria, www.intechopen.com, pp. 153-173.
Matsuyama et al. "Real-Time Dynamic 3-D Object Shape Reconstruction and High-Fidelity Texture Mapping for 3-D Video," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, No. 3, Mar. 2004, pp. 357-369.
May, S., et al., "Robust 3D-Mapping with Time-of-Flight Cameras," 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, Piscataway, NJ, USA, Oct. 10, 2009, pp. 1673-1678.
Melax et al., "Dynamics Based 3D Skeletal Hand Tracking", May 29, 2013 [retrieved Jul. 14, 2016], Proceedings of Graphics Interface, 2013, pp. 63-70. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=2532141>.
Mendez, et al., "Importance Masks for Revealing Occluded Objects in Augmented Reality," Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology, 2 pages, ACM, 2009.
Oka et al., "Real-Time Fingertip Tracking and Gesture Recognition", Nov./Dec. 2002 [retrieved Jul. 15, 2016], IEEE Computer Graphics and Applications, vol. 22, Issue: 6, pp. 64-71. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1046630&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1046630>.
Olsson, K., et al., "Shape from Silhouette Scanner—Creating a Digital 3D Model of a Real Object by Analyzing Photos From Multiple Views," University of Linkoping, Sweden, Copyright VCG 2001, Retrieved from the Internet: <http://liu.diva-portal.org/smash/get/diva2:18671/FULLTEXT01> on Jun. 17, 2013, 52 pages.
Pavlovic, V.I., et al., "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 7, Jul. 1997, pp. 677-695.
Pedersini, et al., "Accurate Surface Reconstruction from Apparent Contours", Sep. 5-8, 2000, European Signal Processing Conference EUSIPCO 2000, vol. 4, Retrieved from the Internet: http://home.deib.polimi.it/sarti/CV_and_publications.html, pp. 1-4.
Rasmussen, Matthew K., "An Analytical Framework for the Preparation and Animation of a Virtual Mannequin for the Purpose of Mannequin-Clothing Interaction Modeling", A Thesis Submitted in Partial Fulfillment of the Requirements for the Master of Science Degree in Civil and Environmental Engineering in the Graduate College of the University of Iowa, Dec. 2008, 98 pages.
Schaar, R., VCNL4020 Vishay Semiconductors. Application Note [online]. Extended Detection Range with VCNL Family of Proximity Sensor. Vishay Intertechnology, Inc., Doc No. 84225, Revised Oct. 25, 2013 [retrieved Mar. 4, 2014]. Retrieved from the Internet: <www.vishay.com>. 4 pages.
Schlattmann et al., "Markerless 4 gestures 6 DOF real-time visual tracking of the human hand with automatic Initialization", 2007 [retrieved Jul. 15, 2016], Eurographics 2007, vol. 26, No. 3, 10 pages, Retrieved from the Internet: <http://cg.cs.uni-bonn.de/aigaion2root/attachments/schlattmann-2007-markerless.pdf>.
Texas Instruments, "4-Channel, 12-Bit, 80-MSPS ADC," VSP5324, Revised Nov. 2012, Texas Instruments Incorporated, 55 pages.
Texas Instruments, "QVGA 3D Time-of-Flight Sensor," Product Overview: OPT 8140, Dec. 2013, Texas Instruments Incorporated, 10 pages.
Texas Instruments, "Time-of-Flight Controller (TFC)," Product Overview; OPT9220, Jan. 2014, Texas Instruments Incorporated, 43 pages.
U.S. Appl. No. 17/409,767, filed Aug. 23, 2021, 2021382563, Dec. 9, 2021.
U.S. Appl. No. 18/094,272, filed Jan. 6, 2023, 20230161415, May 25, 2023.
U.S. Appl. No. 18/506,009, filed Nov. 9, 2023, 20240077950, Mar. 7, 2024.
VCNL4020 Vishay Semiconductors. Datasheet [online]. Vishay Intertechnology, Inc, Doc No. 83476, Rev. 1.3, Oct. 29, 2013 [retrieved Mar. 4, 2014]. Retrieved from the Internet: <www.vishay.com>. 16 pages.
Wang et al., "Tracking of Deformable Hand in Real Time as Continuous Input for Gesture-based Interaction", Jan. 28, 2007 [retrieved Jul. 15, 2016], Proceedings of the 12th International Conference on Intelligent User Interfaces, pp. 235-242. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=1216338>.
Wu, Y., et al., "Vision-Based Gesture Recognition: A Review," Beckman Institute, Copyright 1999, pp. 103-115.
Zhao et al., "Combining Marker-Based Mocap and RGB-D Camera for Acquiring High-Fidelity Hand Motion Data", Jul. 29, 2012 [retrieved Jul. 15, 2016], Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 33-42, Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=2422363>.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20230298292A1 (en)*2022-01-312023-09-21Fujifilm Business Innovation Corp.Information processing apparatus, non-transitory computer readable medium storing program, and information processing method

Also Published As

Publication number | Publication date
US20170017306A1 (en)2017-01-19
US11243612B2 (en)2022-02-08
US20190155394A1 (en)2019-05-23
US9459697B2 (en)2016-10-04
US11740705B2 (en)2023-08-29
US20220236808A1 (en)2022-07-28
US10139918B2 (en)2018-11-27
US20240061511A1 (en)2024-02-22
US20140201666A1 (en)2014-07-17
US20250130648A1 (en)2025-04-24

Similar Documents

Publication | Publication Date | Title
US12204695B2 (en)Dynamic, free-space user interactions for machine control
US11874970B2 (en)Free-space user interface and control using virtual constructs
US11269481B2 (en)Dynamic user interactions for display control and measuring degree of completeness of user gestures
US11567578B2 (en)Systems and methods of free-space gestural interaction
US12086323B2 (en)Determining a primary control mode of controlling an electronic device using 3D gestures or using control manipulations from a user manipulable input device
WO2014113507A1 (en)Dynamic user interactions for display control and customized gesture interpretation

Legal Events

Date | Code | Title | Description
FEPP: Fee payment procedure

Free format text:ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP: Information on status: patent application and granting procedure in general

Free format text:DOCKETED NEW CASE - READY FOR EXAMINATION

AS: Assignment

Owner name:LEAP MOTION, INC., CALIFORNIA

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEDIKIAN, RAFFI;MARSDEN, JONATHAN;MERTENS, KEITH;AND OTHERS;REEL/FRAME:067297/0626

Effective date:20140331

Owner name:ULTRAHAPTICS IP TWO LIMITED, UNITED KINGDOM

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LMI LIQUIDATING CO. LLC;REEL/FRAME:067298/0184

Effective date:20190930

Owner name:LMI LIQUIDATING CO. LLC, CALIFORNIA

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEAP MOTION, INC.;REEL/FRAME:067297/0892

Effective date:20190930

STPP: Information on status: patent application and granting procedure in general

Free format text:NON FINAL ACTION MAILED

STPP: Information on status: patent application and granting procedure in general

Free format text:RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP: Information on status: patent application and granting procedure in general

Free format text:NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

ZAAB: Notice of allowance mailed

Free format text:ORIGINAL CODE: MN/=.

STPP: Information on status: patent application and granting procedure in general

Free format text:NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP: Information on status: patent application and granting procedure in general

Free format text:PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF: Information on status: patent grant

Free format text:PATENTED CASE

