US10129510B2 - Initiating human-machine interaction based on visual attention - Google Patents

Initiating human-machine interaction based on visual attention

Info

Publication number
US10129510B2
Authority
US
United States
Prior art keywords
user
target area
input engine
head
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US15/349,899
Other versions
US20170242478A1 (en)
Inventor
Tao Ma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Priority to US15/349,899
Assigned to SAMSUNG ELECTRONICS CO., LTD. (assignment of assignors interest; assignors: MA, Tao)
Priority to KR1020170022366A (published as KR20170097585A)
Publication of US20170242478A1
Application granted
Publication of US10129510B2
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

A device for interacting with a user is presented. The device includes a target area, a sensor coupled to the target area, wherein the sensor detects whether a human is present in a predefined proximity region and detects a direction of visual attention given by the human in the predefined proximity region, a processor coupled to the sensor and making a determination that the user's visual attention is in a direction of the target area for a minimum visual contact period, and an input engine that is activated based on the determination.

Description

CROSS-REFERENCE TO RELATED APPLICATION(S)
This application claims the benefit of U.S. Provisional Application No. 62/297,076 filed on Feb. 18, 2016, which is incorporated by reference herein.
BACKGROUND
The increasing popularity of portable electronics demands that electronic devices handle more and more functions. One area of development is human-machine interaction based on voice or motion. When a user makes a request with a voice command instead of touching or typing on a visual display, the interaction with the machine becomes more similar to human-to-human interaction, and therefore more natural and intuitive.
One of the challenges in implementing human-machine voice communication is knowing when the machine should be waiting for a user command. As it is seldom the case that a user is constantly and continuously talking to his machine, it is not efficient for the machine to listen for commands at all times. However, it is equally important that the machine not miss a communication from a user when it comes. Existing voice interaction engines such as AMAZON ECHO® and GOOGLE NOW™ address this problem by requiring a trigger word from the user as a signal to the machine that a user command follows. This trigger-word mechanism prevents false triggering and saves processing power. However, it has the disadvantage of feeling unnatural to the user, who has to say the trigger word every time he wants to interact with his machine.
Apple's Siri voice engine does not require a trigger word but instead relies on a button touch to start waiting for a user command. While some users may prefer this touch-based initiation to trigger words, neither option is ideal as they both require the user to do something that he would not do when interacting with another human. A more natural way of initiating machine interaction without wasting processing power or compromising accuracy is desired.
SUMMARY
In one aspect, the present disclosure pertains to a device for interacting with a user. The device includes a target area, a sensor coupled to the target area, a processor coupled to the sensor, and an input engine. The sensor detects whether a human is present in a predefined proximity region and detects a direction of visual attention given by the human in the predefined proximity region. The processor makes a determination that the user's visual attention is in a direction of the target area for a minimum visual contact period, and based on this determination, the input engine is activated.
In another aspect, the present disclosure pertains to a method of transitioning an input engine from sleep mode to interactive mode. The method includes identifying a user eye, determining a direction of user's visual attention based on movement of the eye; and activating an input engine to receive input if the visual attention is in a predefined direction for a minimum visual contact period.
In yet another aspect, the present disclosure pertains to a non-transitory computer-readable medium storing instructions for executing the above method.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts an interactive device according to one embodiment.
FIG. 2 depicts a situation where the user's visual attention is not on the target area.
FIG. 3 depicts a situation where the user's visual attention is on the target area.
FIG. 4 is a flowchart depicting how the device decides to transition from sleep mode to interactive mode according to one embodiment.
FIG. 5 is a flowchart depicting how the device decides to transition from interactive mode to sleep mode according to one embodiment.
FIG. 6 depicts an exemplary device according to one embodiment.
FIG. 7A and FIG. 7B depict an exploded view and a perspective view, respectively, of components within the device in accordance with one embodiment.
FIG. 8A and FIG. 8B depict a rotation range of the exemplary device according to one embodiment.
FIG. 9A and FIG. 9B illustrate a rotation range of the device according to another embodiment.
FIG. 10 depicts an exemplary block diagram of the system architecture according to one embodiment.
FIG. 11 depicts an exemplary block diagram of NLU engine architecture according to one embodiment.
FIG. 12 depicts an exemplary block diagram of hardware architecture of the device according to one embodiment.
FIG. 13 depicts an exemplary block diagram of robotic architecture of the present device according to one embodiment.
FIG. 14 depicts an exemplary flow chart of performing a desired motion by the device according to one embodiment.
FIG. 15 depicts an exemplary code sample for motion API.
FIG. 16 depicts an exemplary timing diagram for servo motor pulse width modulation (PWM) according to one embodiment.
FIG. 17 depicts an exemplary block diagram of the present system according to one embodiment.
FIG. 18 depicts an exemplary diagram of connecting the device to multiple secondary devices according to one embodiment.
FIG. 19 depicts an exemplary diagram of a multi-modality display feature in the device according to one embodiment.
DETAILED DESCRIPTION
The system and method disclosed herein detect human visual attention and use it to initiate human-machine interaction. With this visual attention-based initiation method, a user does not need to take the unnatural step of manually starting the interaction by saying or doing something he would not do if he were interacting with another person. The machine may give a signal to the user that it is listening when it recognizes the visual attention as being directed at it.
FIG. 1 depicts an interactive device 10 according to one embodiment. As shown, the device 10 includes a target area 20, a sensor 30, a processor 40, a microphone 50, and a speaker 60 connected to one another. The sensor 30 may be any sensor capable of proximity sensing and eye tracking, including but not limited to a camera, an infrared sensor, or a laser sensor. The processor 40 may use any suitable computer vision algorithm to determine a user's proximity to the device 10 and to determine the direction of the user's visual attention. The target area 20 may have a display device or some other component that would cause the user to look at it when he wants to interact with the device 10. For example, where the device 10 is a robot, the target area 20 may be made to look like the robot's face with eyes. The sensor 30 may be positioned behind or near the target area 20 to accurately track the user's visual attention. In some embodiments, there may be multiple sensors 30 positioned in different parts of the device 10.
FIG. 2 depicts a situation where the user's visual attention is not on the target area 20. The processor 40 of the device 10 periodically checks whether a user is looking at the device 10, and if not, the device 10 remains in sleep mode.
FIG. 3 depicts a situation where the user's visual attention is on the target area 20. Upon determining that the user's visual attention is on the target area 20, the processor 40 transitions the device 10 from sleep mode to interactive mode. In interactive mode, the microphone 50 is activated to receive the user's voice. In the particular example of FIG. 3, an image of a microphone is shown on the target area 20 to let the user know that the device 10 is in interactive mode and listening.
FIG. 4 is a flowchart depicting how the device 10 decides to transition from sleep mode to interactive mode in accordance with one embodiment. As mentioned before, the sensor 30 periodically (e.g., at a regular time interval such as every few seconds) checks whether there is a potential user in its proximity (at 100). Upon determining that there is a user in proximity (e.g., within a predetermined distance), the sensor 30 locates the user's eye(s) and determines the direction of the user's visual attention or gaze (at 110). If the user looks in the direction of the target area 20 for a preset minimum visual contact period (e.g., 3 seconds) (at 120), the processor 40 transitions the device 10 into interactive mode (at 130). In interactive mode, the microphone and voice input engine are triggered and the device 10 waits for a voice command from the user. If, on the other hand, the user is not looking at the device 10 (at 120), the processor 40 continues to check whether the user's visual attention is now directed at the device 10.
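For illustration only, the following sketch (in Python, not part of the patent) shows one way the wake-up decision of FIG. 4 could be structured; the sensor interface (detect_user_in_proximity, gaze_direction), the polling interval, and the 3-second dwell threshold are hypothetical placeholders rather than the disclosed implementation.

    import time

    PROXIMITY_POLL_INTERVAL_S = 2.0   # periodic proximity check (e.g., every few seconds)
    MIN_VISUAL_CONTACT_S = 3.0        # preset minimum visual contact period

    def wait_for_visual_attention(sensor, target_area):
        """Block until a nearby user has looked at the target area long enough,
        then return so the caller can switch the device to interactive mode."""
        while True:
            if not sensor.detect_user_in_proximity():          # step 100: anyone nearby?
                time.sleep(PROXIMITY_POLL_INTERVAL_S)
                continue
            gaze_started = None
            while sensor.detect_user_in_proximity():
                if sensor.gaze_direction() == target_area.direction():   # steps 110-120
                    gaze_started = gaze_started or time.monotonic()
                    if time.monotonic() - gaze_started >= MIN_VISUAL_CONTACT_S:
                        return                                  # step 130: wake up
                else:
                    gaze_started = None                         # contact broken; restart timer
                time.sleep(0.1)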
Upon the device's transition to interactive mode (at 130), a signal may be generated to let the user know that the interactive mode is ON and the device 10 is listening (at 140). The signal may be visual, such as an image of a microphone being displayed, the eyes of a robot becoming brighter, or the color or brightness of the target area 20 changing. The signal may be an audio signal, such as a short chime or the word "Hi" generated by the speaker 60. The signal may include a movement of a part of the device 10. For example, where the device 10 is a robot, the robot may tilt, raise, or turn its head or change its facial expression (e.g., eyes opening wider, two quick blinks). In an embodiment where there are multiple sensors 30, if the user is behind or to the side of the robot's front face, the robot may turn around to "look at" the user to signal that it is in interactive mode.
Although the description herein focuses on voice interaction, the device and method described herein are not limited to visual attention triggering only voice interaction. In some embodiments, once the device 10 is in interactive mode, it may be ready to receive and process visual/motion input (e.g., a wink, a wave of a finger, or pointing of a finger) or temperature input as well as audio input. Suitable types of sensors may be incorporated into the device 10 to allow the desired type of input to be received and processed.
FIG. 5 is a flowchart depicting how the device 10 decides to transition from interactive mode to sleep mode in accordance with one embodiment. As shown, there may be more than one trigger for transitioning from interactive mode to sleep mode. In the particular embodiment that is depicted, hearing a trigger word like "Bye" or a phrase like "Talk to you later" or "See you later" (at 210) may cause the transition. The sensor 30, which may continually monitor the user's proximity even in interactive mode, may detect that the user has walked away beyond the predefined interaction distance from the device 10 (at 220), and this detection may cause the transition. Also, as mentioned above, the microphone 50 is activated in interactive mode. When no voice is received by the microphone 50 for a predetermined length of time Δt (at 230), the processor 40 may conclude that the interaction is over and transition to sleep mode (at 240). Depending on the embodiment, satisfying any one of these conditions may trigger the transition to sleep mode, or the transition may require at least two of these conditions to be fulfilled.
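As a companion to FIG. 5, the sketch below (again hypothetical, not taken from the patent) combines the three sleep triggers; the numeric thresholds and the required_triggers knob reflect the note above that either one condition or at least two conditions may be required, depending on the embodiment.

    FAREWELL_PHRASES = {"bye", "talk to you later", "see you later"}

    def should_sleep(last_utterance, user_distance_m, silence_s,
                     max_distance_m=3.0, max_silence_s=30.0, required_triggers=1):
        """Return True when enough of the FIG. 5 conditions (210-230) are met.
        All numeric thresholds here are illustrative assumptions."""
        triggers = [
            last_utterance is not None
            and last_utterance.strip().lower() in FAREWELL_PHRASES,   # 210: farewell word
            user_distance_m > max_distance_m,                         # 220: user walked away
            silence_s > max_silence_s,                                # 230: no voice for Δt
        ]
        return sum(triggers) >= required_triggers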
In one embodiment, the device 10 is implemented as a robotic companion device that may serve as an emotive personal assistant, a smart home hub, and an Internet Protocol (IP) camera. The device 10 may include far-field voice recognition capability and natural language understanding, a knowledge engine to answer questions in different types of domains (e.g., weather, general knowledge, traffic, sports, and news), Internet of Things (IoT) hub functionality to control other devices such as lights and thermostats and send notifications from various sensors, a user interface configured to display animations and emotional expressions, and a camera for monitoring the surroundings (e.g., a home). This camera may be a high-definition (HD) camera for wide-angle viewing that is separate from the sensor 30.
An example robotic companion device may include the following hardware components:
    • A system-on-chip (SOC)/central processing unit (CPU) (e.g., SAMSUNG™ ARTIK™ 5) that runs the system and controls software and includes connectivity;
    • A camera, such as a 1080p IP camera (e.g., OV2718), that provides multi-axis camera movement for streaming and security features;
    • A display screen such as a curved/flat display (e.g., a 2-inch screen, 360×480 resolution, 300 pixels per inch (PPI)) that displays notifications and animations;
    • A speaker (e.g., 2 watts) to play music and also play back text to speech response;
    • A motor (e.g., 4 brushless direct current (DC) gear motors) to drive movements;
    • An encoder/potentiometer (e.g., 4 rotary encoders or potentiometers) to precisely control movements;
    • A motor drive board to drive the motors;
    • A microphone array (e.g., 2-6 array microphone system with noise cancellation digital signal processing) to receive voice input and enable accurate and reliable far-field voice recognition;
    • A charging dock;
    • A gear box (e.g., 4 high-torque gear assemblies) that provides adequate torque and speed to provide smooth movements;
    • A Gimbal/Stewart platform that provides a desired range and degrees of motion for the interactive device;
    • Pinions (e.g., 2 pinion gears) that transfer and provide coupling with actuated parts;
    • Support shafts (e.g., 2 support shafts) that provide support and coupling with actuated parts;
    • A speaker mesh (e.g., 1 metal round hole mesh) that provides aesthetic and acoustic enhancements; and
    • A plastic shell body (e.g., 2 plastic outer shell pieces) as the exterior surface of the interactive device to provide protection.
FIG. 6 depicts an exemplary device 10 in accordance with one embodiment. The device 10 as shown includes a head 11 and a body 12. The head 11 includes a head shell 13 and the target area 20 that includes a user interface (UI). The sensor 30, which is a camera in this particular embodiment, is positioned behind and on the inside of the target area 20. The microphone 50 is positioned to the side of the target area 20 to resemble "ears." In this particular embodiment, the speaker 60 is positioned near the body 12. It should be understood that the components of the interactive device 10 may be arranged differently without deviating from the scope of this disclosure. It should also be understood that while the description focuses on an embodiment of the device 10 that is a robotic companion, this is not a limitation and the device 10 may be any electronic device.
FIG. 7A and FIG. 7B depict an exploded view and a perspective view, respectively, of components within the device 10 in accordance with one embodiment. As shown, the device 10 rests on a base 300 for stability and has rollers 302 that allow the body 12 to swivel. A plurality of stepper motors enable movement of various parts: a first stepper motor 304 for head rotation, a set of second stepper motors 306 for head tilting, and a third stepper motor 308 for body rotation. A geared neck sub-assembly 310 and a PCB sub-assembly 312 are incorporated into the device 10, as is a head tilt-control arm 314 coupled to the head tilt-control gear 316.
FIG. 8A and FIG. 8B illustrate a rotation range of the exemplary device 10 in accordance with one embodiment. This example embodiment includes a body 12 that is configured to rotate about a y-axis with a total of 300 degrees of movement (+150 degrees to -150 degrees) while the base 300 and the head 11 remain in position. The head 11 and the body 12 can be controlled separately. FIG. 8B illustrates another embodiment in which the head 11 rotates about a y-axis by a total of 100 degrees of movement (+50 degrees to -50 degrees) while the body 12 remains in position. It should be understood that both the body rotation depicted in FIG. 8A and the head rotation depicted in FIG. 8B may be combined into a single embodiment.
FIG. 9A and FIG. 9B illustrate a rotation range of the interactive device 10 in accordance with another embodiment. In the embodiment of FIG. 9A, the head 11 is configured to rotate about a z-axis with a total of 50 degrees of movement (+25 degrees to -25 degrees). In the embodiment of FIG. 9B, the head 11 is able to rotate about the x-axis as though the head 11 were tilting back and forth.
FIG. 10 depicts an exemplary block diagram of the system architecture in accordance with one embodiment. The system includes a main application process module 350 that communicates with a motion control process module 360. The main application process module 350 includes a behavior tree module 354, a natural language understanding (NLU) engine 356, and a web real-time communications (webRTC) peer-to-peer (P2P) video streaming module 358. The behavior tree module 354 manages and coordinates all motor commands to create a desired display and a desired motor animation. The NLU engine 356 processes speech input, which includes performing signal enhancement, speech recognition, NLU, service integration, and text-to-speech (TTS) response. The webRTC P2P video streaming module 358 manages the video stream from the interactive device to various sources and companion applications.
The motion control process module 360 includes a proportional-integral-derivative (PID) controller 364 and a sensor hub 366. The PID controller 364 precisely controls a plurality of motors (e.g., 4 motors) using a feedback loop and uses analog positional encoders to accurately track motion. The sensor hub 366 provides sound source localization using energy estimation, and may be used to send other sensor events to the main application process module 350.
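The PID controller 364 is described only at a block-diagram level; purely as background (not the disclosed implementation), a textbook positional PID loop driving one motor from an encoder reading could look like the following, with invented gains and method names.

    class PositionPID:
        """Minimal positional PID loop: encoder angle in, motor command out."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target_angle, measured_angle, dt):
            error = target_angle - measured_angle
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
            self.prev_error = error
            # The caller clamps this to a valid motor command (e.g., a PWM duty cycle).
            return self.kp * error + self.ki * self.integral + self.kd * derivative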
FIG. 11 depicts an exemplary block diagram of the NLU engine 356 architecture in accordance with one embodiment. The NLU engine 356 may provide signal enhancement, which improves recognition accuracy and enables far-field voice recognition. The NLU engine 356 uses multiple microphone arrays to perform beamforming to identify the sound source, then uses the direction information of the sound source to cancel out noise from other directions. This function improves overall speech recognition accuracy.
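Beamforming is described here only at a high level; as general background (not the NLU engine 356's actual code), a textbook delay-and-sum beamformer for a linear microphone array can be sketched as follows, with the array geometry and sample rate as assumed inputs.

    import numpy as np

    SPEED_OF_SOUND_M_S = 343.0

    def delay_and_sum(signals, mic_positions_m, direction_rad, fs_hz):
        """Steer a linear array toward direction_rad by delaying and summing.
        signals: (num_mics, num_samples) array; mic_positions_m: positions along the array axis."""
        delays_s = np.asarray(mic_positions_m) * np.sin(direction_rad) / SPEED_OF_SOUND_M_S
        delays = np.round(delays_s * fs_hz).astype(int)
        delays -= delays.min()                       # keep all shifts non-negative
        aligned = [np.roll(sig, -d) for sig, d in zip(signals, delays)]
        return np.mean(aligned, axis=0)              # coherent sum favors the steered direction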
The NLU engine 356 may further provide speech recognition by converting the enhanced speech signal into text based on a well-defined corpus of training data to identify the right word and sentence compositions. The NLU engine 356 may further provide NLU to map the recognized text to a desired action using NLU tools. The NLU tools can map different phrases and language constructs that imply the same intent to a desired action. For example, the NLU engine 356 receives a voice message from a user, "What is the weather in San Jose?" The NLU engine 356 applies NLU to the voice message to derive an intent "weather" and an intent parameter "San Jose," and performs the desired action of fetching weather data for San Jose, e.g., from YAHOO™ Weather.
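To make the intent-mapping step concrete, here is a toy illustration of how recognized text might be mapped to an intent and an intent parameter; the patterns, the intent names, and the parse_intent helper are invented for the example and are not the NLU engine 356's actual tooling.

    import re

    INTENT_PATTERNS = [
        # (intent, pattern with a named group for the intent parameter)
        ("weather", re.compile(r"what is the weather in (?P<location>[\w .]+?)\??$", re.I)),
        ("news",    re.compile(r"(give me|read) the news", re.I)),
    ]

    def parse_intent(text):
        """Map recognized text to (intent, parameters); (None, {}) if nothing matches."""
        for intent, pattern in INTENT_PATTERNS:
            match = pattern.search(text)
            if match:
                return intent, match.groupdict()
        return None, {}

    intent, params = parse_intent("What is the weather in San Jose?")
    # intent == "weather", params == {"location": "San Jose"}; the service-integration
    # layer would then fetch weather data for params["location"].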
Once the NLU engine 356 identifies the desired action, the system fetches data from different service/content providers based on the type of action. For example, the NLU engine 356 provides service integration with a plurality of content providers, such as a weather query from YAHOO Weather, a knowledge query from WOLFRAMALPHA®, a smart home query from the SMARTTHINGS™ API, a news query from NPR™ news, and a sports query from STATS™. The present system formats the data so that a TTS engine can use the data to output a reply to the user via a speaker with a natural tone and speed. For example, the present system formats a data reply, "The weather in San Jose today is sunny, with a high of 54 and a low of 43 degrees," and outputs the data reply as an audio message via the speaker.
FIG. 12 depicts an exemplary block diagram of the hardware architecture of the device 10 in accordance with one embodiment. FIG. 12 is a more specific embodiment of what is depicted in FIG. 1, and shows the application processor 40 as being in communication with the target area 20 (which is a display device in this embodiment), the sensor 30 (which is an HD camera in this embodiment), a microphone 50 (which is part of a microphone array in this embodiment), and speakers 60. For the embodiment of FIG. 12, the processor 40 also communicates with a quad-channel motor driver 70, which in turn controls a neck motor 72, a waist motor 74, a left support motor 76, and a right support motor 78. The processor 40 may also communicate with encoders 80 and a Zigbee radio 85.
FIG. 13 depicts an exemplary block diagram of the robotic architecture of the present device in accordance with one embodiment. As shown, the SOC host computer communicates with the controller to move different parts of the device 10. A ServoEaser library may be used to smooth motor movements by applying an acceleration effect.
FIG. 14 depicts an exemplary flow chart of performing a desired motion by the device 10, in accordance with one embodiment. The device 10 includes a plurality of motion command application programming interfaces (APIs) to perform respective desired actions. For example, a motion command "B1, 30.2, 3" means the interactive device 10 performs a "neckForward" function to 30.2 degrees (relative angle) with a speed level of 3. In another example, a motion command "E1" means the interactive device performs a "happy1" function. FIG. 15 depicts an exemplary code sample for the motion API.
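The full grammar of the motion command API is not spelled out beyond the two examples above; under the assumption that a command is a function code optionally followed by an angle and a speed level, a parser could be sketched as below (the dispatch table and its print placeholders are invented for illustration).

    def parse_motion_command(command):
        """Parse commands such as "B1, 30.2, 3" (neckForward to 30.2 deg, speed 3) or "E1"."""
        parts = [p.strip() for p in command.split(",")]
        code = parts[0]
        angle = float(parts[1]) if len(parts) > 1 else None   # relative angle in degrees
        speed = int(parts[2]) if len(parts) > 2 else None     # speed level
        return code, angle, speed

    # Hypothetical dispatch from function codes to motion routines:
    MOTION_API = {
        "B1": lambda angle, speed: print(f"neckForward({angle} deg, speed {speed})"),
        "E1": lambda angle, speed: print("happy1()"),
    }

    code, angle, speed = parse_motion_command("B1, 30.2, 3")
    MOTION_API[code](angle, speed)   # -> neckForward(30.2 deg, speed 3)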
FIG. 16 depicts an exemplary timing diagram for servo motor pulse width modulation (PWM) in accordance with one embodiment. The servo driver board has PID control to stabilize motor rotation. The real-time angle values are read using a potentiometer.
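FIG. 16 is not reproduced here; as background, hobby servo PWM conventionally uses a roughly 20 ms frame with a 1 ms to 2 ms pulse sweeping the angle range, so an angle-to-pulse-width helper might look like the following. The 0-180 degree range and the timing constants are a common convention, not values taken from the patent.

    PWM_PERIOD_MS = 20.0                    # typical 50 Hz servo frame
    MIN_PULSE_MS, MAX_PULSE_MS = 1.0, 2.0   # pulse widths at the two ends of travel
    ANGLE_RANGE_DEG = 180.0

    def angle_to_pulse_width_ms(angle_deg):
        """Map a target angle to a servo pulse width, clamping to the valid range."""
        angle = max(0.0, min(ANGLE_RANGE_DEG, angle_deg))
        return MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * angle / ANGLE_RANGE_DEG

    def duty_cycle(angle_deg):
        """Fraction of the PWM period spent high for the given angle."""
        return angle_to_pulse_width_ms(angle_deg) / PWM_PERIOD_MS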
FIG. 17 depicts an exemplary block diagram of the present system in accordance with one embodiment. In this embodiment, the device 10 provides security by monitoring users and activity within a boundary area (e.g., within a home), provides connectivity to other devices and appliances, and provides direct interfacing for queries and tasks. For example, suppose the interactive device 10 receives a voice input from a user to pre-heat an oven to 180 degrees. The interactive device 10 communicates with the oven to turn it on at the 180-degree setting and further provides the user with an audio reply confirming that the oven has been set to 180 degrees. The device 10 may further receive an acknowledgement message from the oven that the oven has reached 180 degrees, so that the interactive device 10 can send a second audio reply to notify the user that the oven has reached 180 degrees.
According to one embodiment, the device 10 is further connected to one or more secondary devices to receive information from or provide information to the secondary device. FIG. 18 depicts an exemplary diagram of connecting the device 10 to multiple secondary devices in accordance with one embodiment. The device 10 may be wirelessly connected to each secondary device via a Wi-Fi connection or a Bluetooth connection. The secondary device includes a video camera, a microphone array, and a speaker. For example, a video camera of the secondary device captures and detects a broken window. The secondary device sends the image of the broken window to the present device 10, which may further transmit the image to the user's mobile device.
In accordance with one embodiment, the device 10 provides a multi-modality display system to project visual content (e.g., a movie, information, a UI element) onto areas in different display modes. FIG. 19 depicts an exemplary diagram of a multi-modality display feature in the device 10 in accordance with one embodiment. The device 10 may include an optical projector that is placed within the head shell 13 with the lens of the projector facing up. A curved projector screen may be installed on the internal curved surface of the head shell 13. In one embodiment, the projector projects either onto the curved projector screen on the internal curved surface of the head shell 13 or onto a surface (e.g., a wall surface) that is external to the device 10. The head shell 13 may include a transparent window portion so that the projector can project the visual content onto an external surface (e.g., a wall) through the transparent window. In one embodiment, a multi-path optical guide assembly, such as a rotatable mirror, directs the projected light. The optical guide assembly may direct light from the projector to the curved projector screen to display various UI elements, e.g., eyes and facial expressions. The optical guide assembly may direct light to the surface external to the present interactive device to display visual content such as information and media (e.g., a movie).
According to one embodiment, the device 10 includes one or more sensors to determine whether to project visual content onto the curved projector screen or onto the wall based on various decision factors, including but not limited to user distance, the type of visual content (e.g., a movie), and a specified usage parameter. For example, if the interactive device 10 detects a user who is relatively close, i.e., within a predefined threshold radius, the device 10 displays the visual content on the curved projector screen. In another example, if the type of visual content is a movie, the device 10 displays the visual content on the wall. The device 10 may further determine a mode and a resolution of the projector based on the type of visual content, the proximity to the projection surface, and an optical property of the projector.
In another example, if the camera of the device 10 detects that the amount of light in a room exceeds a threshold reference value (i.e., the room is too bright), the device 10 displays the visual content on the curved projector screen. The rotation of the optical guide assembly may be implemented by a set of actuation mechanisms and control circuits. To give a smooth display mode transition, the projector may be dimmed while the mirror is rotating.
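Read together, the examples above amount to a simple priority rule over the decision factors; the sketch below is one hypothetical way to combine them, with the thresholds and the ordering of the checks chosen for illustration rather than taken from the disclosure.

    def choose_projection_surface(user_distance_m, content_type, ambient_light,
                                  near_threshold_m=1.5, brightness_threshold=0.7):
        """Return "screen" (curved internal screen) or "wall" (external surface)."""
        if ambient_light > brightness_threshold:
            return "screen"      # room too bright for wall projection
        if content_type == "movie":
            return "wall"        # large-format content goes to the external surface
        if user_distance_m <= near_threshold_m:
            return "screen"      # user is close; show UI elements on the head screen
        return "wall"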
According to another embodiment, the device 10 provides feedback in response to a voice input by a user to establish that it is engaged for human voice interaction. The feedback includes one or more of visual feedback, audio feedback, and movement feedback. For example, when a user provides a trigger voice command such as "Hello," the device 10 may tilt its head shell 13 to one side to simulate listening, display wide-open eyes on a UI on the head shell as a listening facial expression, and provide voice feedback. The movement feedback may include raising the head shell and turning the head shell 13 in the direction of the sound source. According to one embodiment, the device 10 has a 4-degree-of-freedom (DOF) mechanical structure design.
As mentioned above, according to another embodiment, the sensor 30 (e.g., the camera) in the device 10 detects that a user is coming closer. The device 10 allows the sensor 30 to further locate the eye of the user and estimate the visual attention of the user. If the device 10 determines that the user has sufficient visual contact, the device 10 triggers the voice input engine and waits for a voice command from the user. According to one embodiment, the device 10 includes a sensor 30 and a microphone array 50 to detect a particular user.
According to yet another embodiment, the device 10 receives a natural gesture input and provides feedback in response to the gesture input. Table 1 lists various gestures, their associated meanings, and the corresponding reaction from the device 10; a sketch of this mapping follows the table.
TABLE 1
Gesture | Meaning | Reaction from Device 10
Index finger of one hand is extended and placed vertically in front of the lips, with the remaining fingers curled toward the palm and the thumb forming a fist | Request for silence | Mutes, or stops moving
Connect thumb and forefinger in a circle and hold the other fingers up straight | Okay | Accepts user commands or executes instructions
Index finger sticking out of the clenched fist, palm facing the gesturer; the finger moves repeatedly towards the gesturer (in a hook) as though to draw something nearer | Beckoning | Turns to focus on the action issuer rather than others
Natural number one through ten | Number gestures | Inputs a number or number-related commands
Raise one hand and then slap hands together | High five | Greeting, congratulations, or celebration
Draw circle and triangle | Alarm state | Communicates with emergency provider
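Read literally as a lookup from a recognized gesture label to a device reaction, Table 1 could be encoded as the sketch below; the gesture identifiers are invented, since the disclosure does not name them.

    GESTURE_REACTIONS = {
        "finger_on_lips":      "mute_or_stop_moving",         # request for silence
        "ok_sign":             "accept_commands",             # okay
        "beckoning":           "turn_to_action_issuer",       # focus on the gesturer
        "number_one_to_ten":   "input_number",                # number gestures
        "high_five":           "greet_or_celebrate",          # greeting / congratulations
        "circle_and_triangle": "contact_emergency_provider",  # alarm state
    }

    def react_to_gesture(gesture_label):
        """Look up the reaction for a recognized gesture; unknown gestures are ignored."""
        return GESTURE_REACTIONS.get(gesture_label, "no_reaction")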
According to one embodiment, the device 10 provides multi-user behavior and pattern recognition. The device 10 understands group behavior and the individual preferences of each user based on interaction with each user. The device 10 provides a heuristic method to learn automatically by logging the time of day of interaction, the duration of interaction, and a user identifier to determine the user's intrinsic pattern. The device 10 may further analyze group interactions between multiple users using a camera to understand group structure and hierarchy. For example, the device 10 may classify a group of users sitting at a table as a family having dinner, which is then correlated with other logs such as the time of day and the number of people detected. This allows the device 10 to determine an average time of day that the family has dinner so that the device can provide information and services such as nutrition information, take-out service, recipes, etc.
In another embodiment, the device 10 determines that a user has an interest in sports based on various factors such as detecting a type of sportswear on the user and the frequency of voice input from the user associated with a particular sport. The device 10 may then provide sports information to the user, such as special events or calendar events.
According to one embodiment, the device 10 receives haptic and tactile interactions from a user to adjust a behavior, add a feature, provide control, or convey a message. For example, a user taps the head shell 13 of the device 10 to convey happiness or satisfaction. The device 10 detects the tap on the head shell 13 and changes its movement, animation, and vocal response to the interaction. According to one embodiment, the device 10 provides emotion detection using voice, images, and sound to identify a user's emotional state. The device 10 may provide a behavior change based on a detected type of music. For example, the speaker of the device 10 provides a surfer-like voice when surf rock music is playing, and the UI of the interactive device displays animations associated with the surf rock genre.
According to one embodiment, the device 10 synchronizes expressions, movements, and output responses for multimodal interaction. The device 10 provides various techniques to ensure that each modality of output is synchronized to create the proper effect needed for a natural interaction with the user. The techniques include buffered query responses and preemptive motion cues. The device 10 synchronizes and coordinates the functions of all the output modalities so that the final actuation is as natural as possible. For example, if the TTS engine's response from the server is slow, a controller mechanism in the device 10 automatically determines that additional time is required and starts an idle animation on the UI and a synchronized movement that shows a natural waiting behavior.
According to one embodiment, the device 10 provides automatic security profile creation and notification. The device 10 includes a high-definition camera, a microphone array, actuators, and speakers to automatically determine and learn the security status of a location based on past history and trigger words. For example, the device 10 can learn that a desired word (e.g., help, danger) or loud noises (e.g., a sound above a predefined decibel threshold) are indicators for investigation, and switch into a tracking mode. This allows the device 10 to track the source of the sound/behavior and monitor the source. The device 10 may further analyze the voice signature to detect stress or mood.
The device 10 further includes a computing module to provide accurate and precise coordination between the computing module and the actuators. The camera and microphone, in conjunction with the computing module, identify a position, a direction, and a video stream of the area of interest and synchronize with the actuating motors to keep track of the area of interest. The device 10 dynamically determines a point of interest to track, where the point of interest may be a sound or a specific action in the camera feed. According to one embodiment, the device 10 dynamically selects a desired modality of sensing. For example, the camera of the device 10 captures both a dog barking (producing a loud, uncommon noise) and an unusual person moving quietly through the home. Although both are anomalous behaviors, the device 10 dynamically determines that the camera should track the unusual person rather than the sound emanating from the barking dog.
According to one embodiment, the device 10 provides machine-learning-based sound source separation and characterization using an actuated microphone array. Sound source separation and acoustic scene analysis involve being able to distinguish different sound sources within a particular acoustic environment. The device 10 uses the microphone array, which can be actuated, together with a combination of beamforming and blind source separation techniques to identify the approximate location of different sound sources and then determine their general category type based on a supervised machine-learning model.
The actuated microphone array allows the device 10 to create a dynamic acoustic model of the environment. The device 10 updates the acoustic model and feeds data from the acoustic model into a blind source separation model that determines and learns different sound sources within the environment after a period of time. For example, the device 10 detects that there is a consistent buzz every day at a specific time of the day. The device 10 has a trained model containing common acoustic signals for common household noises (e.g., the sound of a blender running). The device 10 uses the trained model to determine and identify that the consistent buzz is potentially the sound of a blender. The device 10 can use the identified blender sound to create an acoustic map of the surrounding environment. The device 10 can associate the identified blender sound with a kitchen location. Thus, the device 10 can determine the geographical location of the kitchen based on the direction of the blender sound (using beamforming and localizing the blender sound). The device 10 may further analyze other sound sources within the surrounding environment to infer other sound sources and their respective locations; for example, a TV is associated with a living room and an air vent is associated with a ceiling. This allows better noise canceling and acoustic echo cancellation, and further enables the device 10 to create a model of the surrounding environment to facilitate other tasks carried out by the device 10.
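As an illustration of the acoustic-map idea (not the patent's actual model), the sketch below classifies a localized sound against a trained classifier and, when the class is recognized with enough confidence, records its direction in a simple map; the classifier interface and the threshold are placeholders.

    class AcousticMap:
        """Associates recognized sound categories with the directions they come from."""

        def __init__(self, classifier, confidence_threshold=0.8):
            self.classifier = classifier      # supervised model for common household sounds
            self.threshold = confidence_threshold
            self.sources = {}                 # e.g., {"blender": 135.0} (degrees from beamforming)

        def observe(self, audio_clip, direction_deg):
            label, confidence = self.classifier.predict(audio_clip)
            if confidence >= self.threshold:
                self.sources[label] = direction_deg   # e.g., blender direction -> kitchen
                return label
            return None                       # unknown sound: the device could prompt the user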
In one embodiment, if the device 10 detects a blender sound but does not identify it, the device 10 prompts the user to respond and identify the sound. The user may respond with a voice input that identifies the sound, for example, "a blender." The device 10 receives the voice input, identifies the word "blender" in the voice input, associates the word with the blender sound, and stores this association.
According to one embodiment, the device 10 provides automatic kinematic movement and behavior creation based on manipulation of the device 10 by a user. This allows the device 10 to create a new actuated motion or behavior. The user may begin the creation of a new motion behavior by setting the device 10 to a learning mode. Once the learning mode is initiated, the user moves an actuated part of the device 10 to a desired location at a desired speed (as if controlling the device 10 by hand; this may be either a single pose or a combination of different poses that create a behavior sequence). The user assigns a name to the behavior and identifies one or more key frames. The device 10 registers the behavior and can then execute the motion or poses associated with the behavior automatically.
According to one embodiment, the device 10 further provides inferred pose estimation of the robot based on a visual cue. A user may provide the device 10 with a movement/behavior by articulating a movement with a similar degree of freedom as the device 10. The device 10 captures the movement with an inbuilt camera, analyzes the captured movement, automatically infers the movement, and determines a method of achieving the movement using its actuation mechanism. For example, the device 10 captures a video feed of a pose performed by a user. The device 10 analyzes the video feed of the pose and determines the specific poses, angle, and speed at which the actuating motors need to be triggered to create the closest approximation of the pose. According to one embodiment, the device 10 learns language based on voice, face, and lip recognition.
The present disclosure may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. In this disclosure, example embodiments are described in detail with reference to the accompanying drawings, in which like reference numbers refer to like elements throughout. The present disclosure, however, may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments herein. Rather, these embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the aspects and features of the present disclosure to those skilled in the art. Accordingly, processes, elements, and techniques that are not necessary to those having ordinary skill in the art for a complete understanding of the aspects and features of the present disclosure may not be described. In the drawings, the relative sizes of elements, layers, and regions may be exaggerated for clarity.
It will be understood that, although the terms “first,” “second,” “third,” etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section described below could be termed a second element, component, region, layer or section, without departing from the scope of the present disclosure.
The electronic devices or any other relevant devices or components according to embodiments of the present disclosure described herein may be implemented utilizing any suitable hardware, firmware (e.g., an application-specific integrated circuit), software, or a combination of software, firmware, and hardware. For example, the various components of these devices may be formed on one integrated circuit (IC) chip or on separate IC chips. Further, the various components of these devices may be implemented on a flexible printed circuit film, a tape carrier package (TCP), a printed circuit board (PCB), or formed on one substrate. Further, the various components of these devices may be a process or thread, running on one or more processors, in one or more computing devices, executing computer program instructions and interacting with other system components for performing the various functionalities described herein. The computer program instructions are stored in a memory which may be implemented in a computing device using a standard memory device, such as, for example, a random access memory (RAM). The computer program instructions may also be stored in other non-transitory computer readable media such as, for example, a CD-ROM, flash drive, or the like. Also, a person of skill in the art should recognize that the functionality of various computing devices may be combined or integrated into a single computing device, or the functionality of a particular computing device may be distributed across one or more other computing devices without departing from the spirit and scope of the exemplary embodiments of the present disclosure.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and/or the present specification, and should not be interpreted in an idealized or overly formal sense, unless expressly so defined herein.
Some portions of the above descriptions are presented in terms of algorithms and/or symbolic representations of operations on data bits that may occur within a computer/server memory. These descriptions and representations are used by those skilled in the art of data compression to convey ideas, structures, and methodologies to others skilled in the art. An algorithm is a self-consistent sequence for achieving a desired result and requiring physical manipulations of physical quantities, which may take the form of electro-magnetic signals capable of being stored, transferred, combined, compared, replicated, reproduced, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. These and similar terms are associated with appropriate physical quantities, and are used as representative labels for these quantities. Accordingly, terms such as “processing,” “computing,” “calculating,” “determining,” “displaying” or the like, refer to the action and processes of a computing device or system that manipulates data represented as physical quantities within registers/memories into other data that is also represented by stored/transmitted/displayed physical quantities.
While the embodiments are described in terms of a method or technique, it should be understood that aspects of the disclosure may also cover an article of manufacture that includes a non-transitory computer readable medium on which computer-readable instructions for carrying out embodiments of the method are stored. The computer readable medium may include, for example, semiconductor, magnetic, opto-magnetic, optical, or other forms of computer readable medium for storing computer readable code. Further, the disclosure may also cover apparatuses for practicing embodiments of the system and method disclosed herein. Such apparatus may include circuits, dedicated and/or programmable, to carry out operations pertaining to embodiments.
Examples of such apparatus include a general purpose computer and/or a dedicated computing device when appropriately programmed and may include a combination of a computer/computing device and dedicated/programmable hardware circuits (such as electrical, mechanical, and/or optical circuits) adapted for the various operations pertaining to the embodiments.

Claims (14)

What is claimed is:
1. A device for interacting with a user, comprising:
a head connected to a body, the head comprising a shell and a target area formed as part of an outer surface of the shell, the target area comprising a display device, the head being configured to rotate about an axis with the body remaining in position;
a sensor formed in the shell of the head and being coupled to the target area, wherein the sensor detects whether a human is present in a predefined proximity region around the target area and in response to the detecting that a human is present in the predefined proximity region around the target area, detects a direction of visual attention on the target area given by the human in the predefined proximity region;
a processor coupled to the sensor and making a determination that the user's visual attention is in a direction of the target area for a minimum visual contact period, wherein the processor is configured to determine the user's position in the predefined proximity region and rotate the head so that the target area is in a predefined orientation with respect to the user, and wherein the processor is configured to determine the user's visual attention is in the direction of the target area by locating the user's eye and determining, based on movement of the eye, that the user is looking at the target area of the device for the minimum visual contact period; and
an input engine that is activated based on the determination, wherein based on the determination, an image is shown by the display device on the target area to indicate the input engine is activated.
2. The device of claim 1 further comprising a microphone, wherein the input engine is a voice input engine that listens for a voice command from the user upon activation.
3. The device of claim 1, wherein the sensor comprises a camera.
4. The device of claim 1, wherein the sensor is positioned behind the target area.
5. The device of claim 1 further comprising another sensor positioned in a different part of the device.
6. The device of claim 1 further comprising an output mechanism for signaling to the user that the input engine is activated.
7. The device of claim 6, wherein the output mechanism comprises at least one of a speaker, a movable hardware part, and the display device.
8. The device of claim 1 further comprising a motor driver coupled to the processor to move hardware parts, wherein the motor driver and the processor are enclosed in the shell.
9. A method of transitioning an input engine between sleep mode and interactive mode, comprising:
detecting a user is present in a predefined proximity region of a target area on a device, the device comprising a head connected to a body and the head comprising a shell, the target area being formed as part of an outer surface of the shell and comprising a display device, the head being configured to rotate about an axis with the body remaining in position;
determining the user's position in the predefined proximity region and rotating the target area to be in a predefined orientation with respect to the user;
in response to the detecting that the user is present in the predefined proximity region of the target area, identifying a user eye of the user present in the predefined proximity region of the target area;
determining a direction of user's visual attention based on movement of the eye to determine that the user is looking at the target area of the device;
activating the input engine to receive input in response to determining the visual attention is on the target area in a predefined direction for a minimum visual contact period; and
in response to the activating, providing an image displayed by the display device on the target area to indicate the input engine is activated.
10. The method of claim 9 further comprising generating a signal that the input engine is in interactive mode upon transitioning from the sleep mode to the interactive mode.
11. The method of claim 10, wherein the signal is one or more of an audio signal, a visual display, and a hardware movement.
12. The method of claim 9 further comprising de-activating the input engine in response to receiving no input for a predetermined time duration.
13. The method of claim 9 further comprising de-activating the input engine in response to determining that the user is more than a predefined distance away from the target area.
14. The method of claim 9 further comprising de-activating the input engine in response to receiving a trigger word.
US15/349,899 | 2016-02-18 | 2016-11-11 | Initiating human-machine interaction based on visual attention | Expired - Fee Related | US10129510B2 (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
US15/349,899 (US10129510B2) | 2016-02-18 | 2016-11-11 | Initiating human-machine interaction based on visual attention
KR1020170022366A (KR20170097585A) | 2016-02-18 | 2017-02-20 | Initiating human-machine interaction based on visual attention

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US201662297076P | 2016-02-18 | 2016-02-18 |
US15/349,899 (US10129510B2) | 2016-02-18 | 2016-11-11 | Initiating human-machine interaction based on visual attention

Publications (2)

Publication Number | Publication Date
US20170242478A1 (en) | 2017-08-24
US10129510B2 (en) | 2018-11-13

Family

ID=59629595

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
US15/349,899 (US10129510B2, Expired - Fee Related) | Initiating human-machine interaction based on visual attention | 2016-02-18 | 2016-11-11
US15/353,578 (US10321104B2, Active) | Multi-modal projection display | 2016-02-18 | 2016-11-16

Family Applications After (1)

Application Number | Title | Priority Date | Filing Date
US15/353,578 (US10321104B2, Active) | Multi-modal projection display | 2016-02-18 | 2016-11-16

Country Status (2)

Country | Link
US (2) | US10129510B2 (en)
KR (2) | KR20170097581A (en)


Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20010046034A1 (en)* | 2000-02-18 | 2001-11-29 | Gold Robert J. | Machine for creating handheld illumination and projectable multimedia presentations
US20020113912A1 (en)* | 2000-11-20 | 2002-08-22 | Haviland Wright | Dual model near-eye and projection display system
DE10357726B3 (en) | 2003-12-10 | 2005-08-11 | Siemens Ag | Display device, especially for mobile telephone, has mirror and lens arrangement enabling laser light to be deflected to display surface or projection surface
US7134756B2 (en)* | 2004-05-04 | 2006-11-14 | Microsoft Corporation | Selectable projector and imaging modes of display table
EP1793600B1 (en)* | 2004-09-21 | 2011-11-02 | Nikon Corporation | Projector device, mobile telephone, and camera
DE102005049825A1 (en) | 2005-10-18 | 2007-04-19 | Benq Mobile Gmbh & Co. Ohg | Mobile communication terminal has display with laser projector and back projection screen illuminated via rotating and fixed mirrors giving remote projection option
US8042949B2 (en)* | 2008-05-02 | 2011-10-25 | Microsoft Corporation | Projection of images onto tangible user interfaces
KR101537596B1 (en)* | 2008-10-15 | 2015-07-20 | 엘지전자 주식회사 | Mobile terminal and method for recognizing touch thereof
EP2421620A4 (en) | 2009-04-24 | 2015-06-17 | Unisen Inc Dba Star Trac | Fitness product projection display assembly
JP5601083B2 (en)* | 2010-08-16 | 2014-10-08 | ソニー株式会社 | Information processing apparatus, information processing method, and program
JP2012165359A (en)* | 2010-10-13 | 2012-08-30 | Act Research Corp | Projection display system and method with multiple, convertible display modes
CN102707557A (en) | 2011-03-28 | 2012-10-03 | 纳赛诺科技(句容)有限公司 | Micro projection device being embedded in mobile phone
US10215583B2 (en)* | 2013-03-15 | 2019-02-26 | Honda Motor Co., Ltd. | Multi-level navigation monitoring and control
JPWO2015098190A1 (en)* | 2013-12-27 | 2017-03-23 | ソニー株式会社 | Control device, control method, and computer program
CN106030495B (en)* | 2015-01-30 | 2021-04-13 | 索尼深度传感解决方案股份有限公司 | Multimodal gesture-based interaction system and method utilizing a single sensing system
US20160292921A1 (en)* | 2015-04-03 | 2016-10-06 | Avegant Corporation | System, apparatus, and method for displaying an image using light of varying intensities

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8292433B2 (en) | 2003-03-21 | 2012-10-23 | Queen's University At Kingston | Method and apparatus for communication between humans and devices
US20070237516A1 (en)* | 2006-03-28 | 2007-10-11 | Canon Kabushiki Kaisha | Image pickup device
US9311527B1 (en) | 2011-07-14 | 2016-04-12 | The Research Foundation For The State University Of New York | Real time eye tracking for human computer interaction
US20140310256A1 (en) | 2011-10-28 | 2014-10-16 | Tobii Technology Ab | Method and system for user initiated query searches based on gaze data
US20130304479A1 (en) | 2012-05-08 | 2013-11-14 | Google Inc. | Sustained Eye Gaze for Determining Intent to Interact
US20140145935A1 (en) | 2012-11-27 | 2014-05-29 | Sebastian Sztuk | Systems and methods of eye tracking control on mobile device
US20140168056A1 (en) | 2012-12-19 | 2014-06-19 | Qualcomm Incorporated | Enabling augmented reality using eye gaze tracking
US9110635B2 (en) | 2013-12-03 | 2015-08-18 | Lenova (Singapore) Pte. Ltd. | Initiating personal assistant application based on eye tracking and gestures
US20160062459A1 (en) | 2014-05-09 | 2016-03-03 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
US20160154460A1 (en) | 2016-02-06 | 2016-06-02 | Maximilian Ralph Peter von Liechtenstein | Gaze Initiated Interaction Technique

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11551442B2 (en)* | 2017-12-22 | 2023-01-10 | Nokia Technologies Oy | Apparatus, method and system for identifying a target object from a plurality of objects
US11289086B2 (en)* | 2019-11-01 | 2022-03-29 | Microsoft Technology Licensing, Llc | Selective response rendering for virtual assistants
US11620855B2 (en) | 2020-09-03 | 2023-04-04 | International Business Machines Corporation | Iterative memory mapping operations in smart lens/augmented glasses
EP4492136A4 (en)* | 2022-05-09 | 2025-07-16 | Samsung Electronics Co Ltd | ELECTRONIC DEVICE WITH SLIDING PROJECTOR AND CONTROL METHOD THEREFOR

Also Published As

Publication number | Publication date
US20170242478A1 (en) | 2017-08-24
US20170244942A1 (en) | 2017-08-24
KR20170097581A (en) | 2017-08-28
KR20170097585A (en) | 2017-08-28
US10321104B2 (en) | 2019-06-11

Similar Documents

Publication | Publication Date | Title
US10129510B2 (en) | Initiating human-machine interaction based on visual attention
US12278932B2 (en) | Methods and apparatus to assist listeners in distinguishing between electronically generated binaural sound and physical environment sound
US11017217B2 (en) | System and method for controlling appliances using motion gestures
CN112352209B (en) | System and method for interacting with an artificial intelligence system and interface
US12321666B2 (en) | Methods for quick message response and dictation in a three-dimensional environment
JP7622956B2 (en) | Devices and programs
JP2021193572A (en) | Device control using gaze information
US8700392B1 (en) | Speech-inclusive device interfaces
KR20180129886A (en) | Persistent companion device configuration and deployment platform
JP2023534901A (en) | Assistant device mediation using wearable device data
US12405703B2 (en) | Digital assistant interactions in extended reality
US20240177424A1 (en) | Digital assistant object placement
WO2020021861A1 (en) | Information processing device, information processing system, information processing method, and information processing program
CN111491212A (en) | Video processing method and electronic device
CN114779924A (en) | Head-mounted display device, method for controlling household device and storage medium
WO2025188634A1 (en) | Techniques for capturing media
WO2025072373A1 (en) | User interfaces and techniques for moving a computer system
CN117836741A (en) | Digital Assistant Object Placement
JP2021119642A (en) | Information processing equipment, information processing method, and recording medium

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MA, TAO;REEL/FRAME:040292/0826

Effective date: 20161111

STCF | Information on status: patent grant

Free format text: PATENTED CASE

FEPP | Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS | Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH | Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP | Lapsed due to failure to pay maintenance fee

Effective date: 20221113

