US10475464B2 - Method and apparatus for connecting service between user devices using voice - Google Patents

Method and apparatus for connecting service between user devices using voice

Info

Publication number
US10475464B2
US10475464B2 (application US15/797,910; US201715797910A)
Authority
US
United States
Prior art keywords
information
voice
service
user
connection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/797,910
Other versions
US20180047406A1 (en)
Inventor
Sehwan Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Priority to US15/797,910
Publication of US20180047406A1
Application granted
Publication of US10475464B2
Status: Active
Anticipated expiration

Abstract

A method of connecting a service between a device and at least one other device is provided. The method includes recording, by the device, a user voice input in a state where a voice command button has been input, outputting first information based on the recorded user voice when an input of the voice command button is cancelled, receiving, by the device, second information corresponding to the first information, recognizing a service type according to the first information and the second information, connecting the device to a subject device in an operation mode of the device determined according to the recognized service type, and performing a service with the connected subject device.

Description

PRIORITY
This application is a Continuation of, and claims priority under 35 U.S.C. § 120 to, U.S. patent application Ser. No. 13/934,839, which was filed on Jul. 3, 2013, issued on Oct. 31, 2017 as U.S. Pat. No. 9,805,733, and claimed priority under 35 U.S.C. § 119(a) to a Korean patent application filed on Jul. 3, 2012 in the Korean Intellectual Property Office and assigned Serial No. 10-2012-0072290, the entire contents of all of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a method and apparatus for connecting a service between user devices, and more particularly, to a method and apparatus for connecting a service between user devices according to a user voice input through the user devices.
2. Description of the Related Art
Along with a recent development of digital technologies, various portable user devices capable of performing communication and processing personal information, such as mobile communication terminals, Personal Digital Assistants (PDAs), electronic organizers, smart phones, tablet Personal Computers (PCs), etc., are being introduced. Such user devices are not required to remain in their respective traditional unique usage areas, but are reaching a mobile convergence phase covering usage areas of other terminals. For example, a user device may include various functions for performing voice calls, video calls, text message transmission, such as a Short Message Service (SMS) and a Multimedia Message Service (MMS) message transmission, electronic organizer functions, photography, e-mail transmission and reception, broadcast replay, video replay, Internet, electronic commerce, music replay, schedule management, Social Networking Service (SNS), a friend search service, a messenger, a dictionary, a game, and a Wireless Local Area Network (WLAN) link.
In particular, along with the development of wireless communication technologies, various wireless connections that may be used in providing a link service between user devices are being developed and applied. For example, wireless connection technologies for supporting a link service between user devices, such as Bluetooth, ZigBee and Ultra-Wideband (UWB), are being developed.
In order to use such wireless connection technologies, complicated processes, such as search, selection and authentication of a peripheral device, connection to the peripheral device, selection of data to be shared, and selection of transmission of the selected data, are required.
For example, in order to connect a service between user devices, one of the user devices is first operated as a master device (i.e., a master), in a state where the user devices to be connected to each other recognize each other's addresses, and thereafter another user device is operated as a slave device (i.e., a slave), so as to perform a connection request to the already executed master. Further, the master performs a connection by checking a separate preset code to identify whether the slave is a slave that intends to be connected to the master.
Likewise, when intending to use a service connection function between existing user devices, many user interactions are required to perform the necessary authentication and service connection, and thus the service connection between the user devices becomes inconvenient to users. Therefore, there is a need for a service for improving user convenience by simplifying the complicated pairing procedures of various wireless connections, in order to perform a link service between user devices.
SUMMARY OF THE INVENTION
The present invention has been made in order to address at least the above problems and provide at least the advantages described below. An aspect of the present invention is to provide a method and apparatus for connecting a service between user devices using a voice capable of simplifying a procedure for service connection between at least two user devices.
According to an aspect of the present invention, a method of connecting a service between a device and at least one other device is provided. The method includes recording, by the device, a user voice input in a state where a voice command button has been input; outputting first information based on the recorded user voice when an input of the voice command button is cancelled; receiving, by the device, second information corresponding to the first information; recognizing a service type according to the first information and the received second information; connecting the device to a subject device in an operation mode of the device determined according to the recognized service type; and performing a service with the connected subject device.
According to another aspect of the present invention, a method of connecting a service between a device and at least one other device using a voice is provided. The method includes recording, by the device, a timestamp and waiting for reception of an input of a user voice when a voice command button is input; receiving input of the user voice, recording the received user voice, and generating recording data based upon the received user voice; generating, when the input of the voice command button is cancelled, voice information according to the recorded timestamp, the recording data and a device address of the device; transmitting the generated voice information to a server; receiving service information from the server; checking an operation mode of the device, the device address of a subject device for connection and a type of an execution service according to the service information; connecting the device to a subject device according to the checked operation mode; and performing, upon connecting to the subject device, a service with the connected subject device according to the checked type of the execution service and according to the checked operation mode.
According to another aspect of the present invention, a method of connecting a service using a voice is provided. The method includes recording, by a first device, a timestamp and waiting for reception of an input of a user voice when a voice command button is input; receiving input of the user voice and generating recording data by recording the input user voice; generating, when the input of the voice command button is cancelled, first voice information that includes the recorded timestamp, the recording data, and a device address of the first device; loading the generated first voice information as audio data and outputting the loaded audio data through a speaker of the first device; receiving second voice information output through a speaker of a second device through a microphone of the first device; checking, from the first voice information and the second voice information, an operation mode of the first device, a device address of the second device for establishing a connection with the first device, and a type of an execution service; connecting the first device to the second device according to the operation mode; and performing, upon connecting the first device to the second device, a service according to the type of the execution service and according to the operation mode of the first device.
According to another aspect of the present invention, a method of connecting a service between a device and at least one other device using a voice is provided. The method includes recording a timestamp and waiting for reception of an input of a user voice when a voice command button is input; receiving input of the user voice and generating recording data by recording the input user voice; generating, when input of the voice command button is cancelled, an authentication key having a unique string by using a voice waveform of the recording data and the recorded timestamp; changing device information for identifying the device using the authentication key; searching for a subject device to be connected with the device having device information corresponding to the authentication key at a preset communication mode; performing a connection between the device and the subject device through transmission of a connection request and reception of a connection approval; and performing, upon performing the connection with the subject device, the service.
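The aspect above derives an authentication key having a unique string from the voice waveform of the recording data and the recorded timestamp. The document does not specify the key-derivation function, so the following minimal sketch stands in a SHA-256 digest over the raw samples and the timestamp as an assumption; the function name and sample format are hypothetical:

```python
import hashlib
import struct

def make_auth_key(waveform_samples, timestamp_ms):
    """Derive a unique string key from a voice waveform and a timestamp.

    Illustrative only: the key-derivation function is not specified in
    the source, so a SHA-256 digest over little-endian 16-bit samples
    plus a 64-bit timestamp stands in for it here.
    """
    payload = struct.pack(f"<{len(waveform_samples)}h", *waveform_samples)
    payload += struct.pack("<q", timestamp_ms)
    return hashlib.sha256(payload).hexdigest()
```

Two devices that captured the same utterance with the same timestamp would derive the same key string, which could then be embedded in the device information used during the search for the subject device.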
According to another aspect of the present invention, a device for supporting a service connection between the device and at least one other device by using an input voice is provided. The device includes a storage unit for storing at least one program; and a controller for executing the at least one program to record a user voice input in a state where a voice command button has been input, output first information based on the recorded user voice when an input of the voice command button is cancelled, receive second information corresponding to the first information, recognize a service type according to the first information and the second information, connect the device to a subject device in an operation mode of the device determined according to the recognized service type, and perform a service with the connected subject device.
According to another embodiment of the present invention, a non-transitory computer-readable recording medium having recorded thereon a program for performing a method of connecting a service between a device and at least one other device is provided. The method includes recording, by the device, a user voice input in a state where a voice command button has been input; outputting first information based on the recorded user voice when an input of the voice command button is cancelled; receiving, by the device, second information corresponding to the first information; recognizing a service type according to the first information and the received second information; determining an operation mode of the device according to the recognized service type; and performing a service connection between the device and a subject device according to the determined operation mode.
Another aspect of the present invention provides a computer program including instructions arranged, when executed, to implement a method, system and/or apparatus, in accordance with any one of the above-described aspects. Another aspect of the present invention provides a machine-readable storage that stores such a program.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, describes example embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, features and advantages of the present invention will be more apparent from the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1 schematically illustrates an operation of connecting a service between user devices according to an embodiment of the present invention;
FIG. 2 schematically illustrates a configuration of a user device according to an embodiment of the present invention;
FIG. 3 schematically illustrates a platform structure of a user device for processing a function according to an embodiment of the present invention;
FIG. 4 is a signal flowchart illustrating a process of a service connection between user devices according to an embodiment of the present invention;
FIG. 5 illustrates an example where voice information used in a service connection of user devices is stored according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a process of a service connection based on a user voice in a user device according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a process of authenticating user devices in a server according to an embodiment of the present invention;
FIG. 8 schematically illustrates a concept of connecting a service between user devices according to an embodiment of the present invention;
FIG. 9 is a signal flowchart illustrating a process of a service connection between user devices according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating a service connection based on a user voice in a user device according to an embodiment of the present invention; and
FIGS. 11 and 12 are flowcharts illustrating a process of processing a service connection based on a user voice in a user device according to an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE PRESENT INVENTION
Embodiments of the present invention are described as follows with reference to the accompanying drawings in detail. The following description is provided to assist in a comprehensive understanding of the present invention, as defined by the claims. The description includes various specific details to assist in that understanding but these are to be regarded as merely examples. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the invention.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of the embodiments of the present invention is provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims.
The same or similar reference numbers may be used throughout the drawings to refer to the same or similar parts. Detailed descriptions of well-known processes, functions, constructions and structures may be omitted for clarity and conciseness, and to avoid obscuring the subject matter of the present invention.
Throughout the description and claims of this specification, the words “comprise”, “include” and “contain” and variations of the words, for example “comprising” and “comprises”, mean “including but not limited to”, and are not intended to (and do not) exclude other elements, features, components, integers, steps, processes, functions, characteristics, and the like.
Throughout the description and claims of this specification, the singular (e.g. “a”, “an”, and “the”) encompasses the plural unless the context otherwise requires. Thus, for example, reference to “an object” includes reference to one or more of such objects.
Elements, features, components, integers, steps, processes, functions, characteristics, and the like described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
It will also be appreciated that, throughout the description and claims of this specification, language in the general form of “X for Y” (where Y is some action, process, activity or step and X is some means for carrying out that action, activity or step) encompasses means X adapted, configured, or arranged specifically, but not exclusively, to do Y.
The present invention relates to a method, apparatus and system for connecting a service between user devices (e.g. connecting the user devices to enable the user devices to participate in a service), and more particularly, to a method, apparatus and system for connecting a service between user devices using a user voice input through the user devices. According to embodiments of the present invention, an authentication process for connecting the user devices, and for connecting a service between the connected user devices, is automatically executed using a user voice input in the user devices.
According to embodiments of the present invention, a user inputs a command corresponding to a service to be executed in at least two user devices. Specifically, the command is input to devices that the user intends to be connected with each other through the service (e.g., through a device connection, data sharing after device connection, execution of a designated function after device connection (e.g., a speaker mode, WiFi display, etc.)). The command may be input in the form of a voice command. Then, respective user devices determine a master mode (or Access Point (AP) mode) and a slave mode (or non-AP mode), and automatically execute a service connection corresponding to the voice command in the determined mode. In this manner, respective user devices may automatically perform mutual connection according to a user's voice command, and perform a service using the connection.
According to embodiments of the present invention, the device operating in the master mode may be a device, from among at least two user devices, in which an input for performing a service connection process has been performed first via a voice command button, and may be a device that transmits data. Further, the device operating in the slave mode is a device, from among the at least two user devices, in which an input for performing a service connection process is performed later (i.e., later than a corresponding input in the master device) via a voice command button, and may be a device that receives data from the device of the master mode.
According to an embodiment of the present invention, authentication for a service connection between user devices may be performed using a voice waveform input from a user, and the determination of the master mode and the slave mode of the connected respective user devices may be performed through time analysis of the voice input from the user. Further, the service to be executed after the respective user devices are connected may be determined through parameter analysis (or through a command) corresponding to a voice input from the user. Authentication for a service connection between such devices may be performed by communicating or linking with a server, or by using data exchanged between the devices.
For example, when linked with the server, a device (e.g., a first device) records a timestamp when a voice command button is input or actuated (e.g., when a user pushes the voice command button). The first device waits for the reception of the user voice input while actuation of the voice command button is maintained (e.g., while the voice command button remains pushed by the user), and generates recording data by recording a user voice input through a microphone. When actuation of the voice command button is cancelled (e.g., when the user releases the pushed voice command button), the first device generates voice information including the timestamp, the recording data and the device address of the first device, and transmits the generated voice information to the server. In response, the first device receives service information corresponding to the voice information from the server, and checks, from the received service information, the operation mode of the first device, the device address of a second device to which the first device is to be connected, and the type of the service to be executed. Further, a connection is established between the first device and the second device according to the operation mode, and thereafter, when connected with the second device, the first device performs a service of the checked service type in the determined operation mode.
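The client-side flow just described (record a timestamp on button press, capture audio while the button is held, and send the bundled voice information to the server on release) can be sketched roughly as follows. The class, method and field names, and the microphone/server interfaces, are hypothetical stand-ins, not names from the source:

```python
import time
from dataclasses import dataclass

@dataclass
class VoiceInfo:
    timestamp: float          # recorded when the button was actuated
    recording: bytes          # audio captured while the button was held
    device_address: str       # e.g. the device's MAC address

class VoiceConnectClient:
    """Sketch of the press -> record -> release -> send-to-server flow."""

    def __init__(self, device_address, mic, server):
        self.device_address = device_address
        self.mic = mic            # stub with read() -> bytes
        self.server = server      # stub with exchange(VoiceInfo) -> dict
        self._t0 = None
        self._chunks = []

    def on_button_down(self):
        # Record the timestamp at actuation and start waiting for voice input.
        self._t0 = time.time()
        self._chunks = []

    def on_voice_chunk(self):
        # Accumulate recording data while the button remains pushed.
        self._chunks.append(self.mic.read())

    def on_button_up(self):
        # Bundle timestamp, recording data and device address, send to server,
        # and return the service information the server replies with.
        info = VoiceInfo(self._t0, b"".join(self._chunks), self.device_address)
        return self.server.exchange(info)
```

The returned service information would then carry the operation mode, the peer device address, and the execution service type, as described above.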
Further, when data is directly exchanged between devices themselves, the first device records the time stamp at the time when the voice command button is actuated, awaits reception of the user voice input, and generates recording data by recording a voice input by the user through a microphone. When actuation of the voice command button is canceled, the first device generates first voice information including the time stamp, the recording data and the device address of the first device, generates a sound using the generated voice information, and outputs the sound corresponding to the voice information through a speaker so that a second adjacent device may detect the sound. In response, the second device outputs a sound corresponding to second voice information through a speaker, so that the first device may detect the sound through a microphone and receive the second voice information as an input. The first device checks the operation mode of the first device, the device address of the second device to which the first device is to be connected, and the type of the execution service from the first voice information and the second voice information. Further, a connection is established between the first device and the second device, and thereafter the first device performs a service according to the type of the executed service corresponding to the operation mode when connected with the second device.
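The device-to-device variant above leaves the acoustic encoding scheme open. The sketch below shows only a hypothetical framing step for the first voice information (timestamp, recording data, device address); the modulation to sound for the speaker and the demodulation at the microphone are assumed to sit around it and are not shown:

```python
import base64
import json

def encode_frame(timestamp_ms, recording, device_address):
    """Pack voice-information fields into a byte frame.

    Illustrative framing only (JSON with base64 audio); the source does
    not specify the over-the-air format.
    """
    frame = {
        "ts": timestamp_ms,
        "rec": base64.b64encode(recording).decode("ascii"),
        "addr": device_address,
    }
    return json.dumps(frame).encode("utf-8")

def decode_frame(data):
    """Recover (timestamp, recording, device address) from a frame."""
    frame = json.loads(data.decode("utf-8"))
    return frame["ts"], base64.b64decode(frame["rec"]), frame["addr"]
```

Each device would encode its own voice information, emit it through its speaker, and decode the frame it captures from the peer's speaker; comparing the two frames then yields the operation mode, peer address and execution service type.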
Hereinafter, the configuration of a user device and a method of controlling the user device according to an embodiment of the present invention are described with reference to the drawings. The configuration of the user device and the method of controlling operation thereof according to an embodiment of the present invention are not limited to the description below, and thus it should be noted that the present invention may be applied to various other embodiments, for example, based on the embodiments below.
FIG. 1 is a diagram schematically illustrating an operation of connecting a service between user devices according to an embodiment of the present invention.
As illustrated in FIG. 1, a system for performing a service connection based on a user voice according to an embodiment of the present invention includes a first user device 100, a second user device 200 and a server 300. In the present example, a service connection operation is described using two user devices 100 and 200. However, embodiments of the present invention are not limited thereto. For example, service connections between two or more user devices are also possible in accordance with embodiments of the present invention.
The first user device 100 and the second user device 200 include a voice command button 150 for instructing the respective devices 100 and 200 to wait for reception of a voice input from a user in order to perform a service connection based on a voice. In accordance with embodiments of the present invention, the voice command button 150 may be implemented in various ways, for example, in the form of a hardware button, a soft interface (e.g., a Graphical User Interface (GUI)), etc. A user may input a voice (e.g., a voice command) while the voice command button 150 of the first user device 100 and the second user device 200 is pushed.
The first user device 100 and the second user device 200 wait for reception of a user voice input upon sensing an input via the voice command button 150. At this time, the first user device 100 and the second user device 200 may record the value of the time at which the voice command button 150 is actuated as a timestamp. Further, the first user device 100 and the second user device 200 record a user voice (generate recording data) input while the voice command button 150 is pushed, and perform voice recognition on the recorded voice (recording data) when actuation of the voice command button 150 is canceled. The first user device 100 and the second user device 200 may transmit voice information including the recording data (particularly, the waveform), timestamp, and device identifier to the server 300 when a command for establishing a connection between user devices is detected as a result of the voice recognition.
The first user device 100 and the second user device 200 transmit voice information to the server 300, and receive service information corresponding to the voice information from the server 300. Further, when receiving the service information from the server 300, the first user device 100 and the second user device 200 establish a connection such that they are mutually connected according to the service information, and execute the service. At this time, the first user device 100 and the second user device 200 analyze the received service information, and determine the service type, connection information (device information of the device to be connected), execution information, etc. Further, the first user device 100 and the second user device 200 determine the master mode or slave mode according to the service type, perform a connection in the determined mode according to a communication method that is set with the other device according to the connection information, and execute a service according to the execution information when connected with the device corresponding to the connection information.
According to an embodiment of the present invention, the communication method may include various wireless communication methods, for example, a Wireless Local Area Network (WLAN) connection, Bluetooth connection, WiFi direct connection, WiFi display connection, Near Field Communication (NFC) connection, etc. The communication method used in the connection may be determined using any suitable determination scheme. For example, the method may be determined according to a user setting or a default setting, may be automatically determined according to a recently performed connection method, or may be randomly determined.
According to embodiments of the present invention, the execution information represents the type of a service to be executed. The type of service to be executed may include, for example, data sharing (transmission), a left and right speaker linkage function, and information indicating certain functions (operations) of the input and output linkage function, etc.
The server 300 searches for at least two devices that have transmitted the same recording data within the same time window (e.g., period), and determines the found devices as a set of devices for a service connection between them. Further, information on the service for supporting the service connection between the set of determined devices may be respectively transmitted to each device in the set.
Specifically, the server 300 stores voice information from various user devices, for example, the first user device 100 and the second user device 200. In particular, the server 300 receives voice information from the user devices (e.g., the first user device 100 and second user device 200 of FIG. 1), and may store the received voice information in the database by dividing or categorizing the received voice information according to device. At this time, when storing the voice information, the server 300 parses the voice information to extract device information, recording data (waveform), timestamp, connection information (e.g., device addresses), raw data, etc., and then stores the information.
When voice information is received from a user device, the server 300 compares and analyzes the voice waveform according to the voice information for connection authentication of the user device having transmitted the voice information. For example, a voice waveform coinciding with the voice waveform of the received voice information is searched for in the database.
When the same voice waveform (e.g., a matching voice waveform) is detected, the server 300 compares timestamps in the voice information sets of the respective user devices having the same voice waveform, and determines the service type of the operation mode of each user device according to the time difference between the timestamps. The server 300 recognizes a voice parameter (command) from the recording data of the voice information and generates execution information. Further, the server 300 respectively generates service information to be transmitted to each user device using the service type, connection information and execution information per user device, and respectively transmits the generated service information to the user devices determined as having the same voice waveform. At this time, assuming that the determined user devices are the first user device 100 and the second user device 200, a different service information set may be provided to each of the first user device 100 and the second user device 200.
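A minimal sketch of this server-side pairing logic, under two stated assumptions: waveform matching is simplified to exact equality (the description allows similarity within an error range), and a hypothetical 500 ms skew tolerance bounds the timestamp comparison. Per the earlier description, the device whose button input came first takes the master role and the later one the slave role:

```python
def assign_roles(entries, max_skew_ms=500):
    """Pair voice-information entries with matching waveforms and pick roles.

    entries: list of dicts with 'addr' (device address), 'waveform'
    (recording-data waveform), and 'ts' (timestamp in ms).
    Returns {address: (mode, peer_address)} for each paired device.
    """
    roles = {}
    for i, a in enumerate(entries):
        for b in entries[i + 1:]:
            # Same waveform and close-enough timestamps => same utterance.
            if a["waveform"] == b["waveform"] and abs(a["ts"] - b["ts"]) <= max_skew_ms:
                # Earlier button press => master; later press => slave.
                first, second = (a, b) if a["ts"] <= b["ts"] else (b, a)
                roles[first["addr"]] = ("master", second["addr"])
                roles[second["addr"]] = ("slave", first["addr"])
    return roles
```

The server would then attach the recognized execution information to each entry of the returned mapping before sending the per-device service information.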
For example, the service information transmitted to the first user device 100 may be first service information having service type information indicating operation in the master mode, connection information indicating the device identifier (e.g., the device address, etc.) of the second user device 200, and execution information on the service to be executed. Further, the service information transmitted to the second user device 200 may be second service information having service type information indicating operation in the slave mode, connection information indicating the device identifier (e.g., the device address, etc.) of the first user device 100, and execution information on the service to be executed.
According to an embodiment of the present invention having the above configuration, the respective user devices 100 and 200 provide a hardware- or software-type voice command button 150 for supporting a service connection based on a user voice. The user pushes the respective voice command buttons 150 of two or more user devices to be mutually connected, simultaneously or within a preset error range, as in the first user device 100 and the second user device 200. Further, users input (via speech) a desired voice service command after pushing the respective voice command buttons 150, and then cancel the pushed state of the respective voice command buttons 150.
Thefirst user device100 and thesecond user device200 perform voice recognition, through a user voice recognition module, on the user voice that is input through a microphone and stored at the time of detecting the cancellation of thevoice command button150 input. Further, if a predefined command (e.g., "device connection") for initiating a service connection between devices is detected in the voice recognition result, voice information needed for the service connection, such as recording data, a timestamp and a device identifier (e.g., the device address), may be respectively transmitted to theserver300.
If voice information is received from user devices (e.g., thefirst user device100 and the second user device200), theserver300 stores the voice information and compares the respective stored information sets, thereby searching for at least two user devices having related voice information sets. For example, theserver300 may search for two or more user devices having the same voice waveforms (or similar voice waveforms within a certain error range), or having similar timestamps within a certain error range. Theserver300 may use any suitable technique, for example a heuristic approach, when comparing voice information. That is, theserver300 does not necessarily search for completely coinciding or exactly matched data (voice information), but may search for sufficiently similar data (same or similar data within a certain error range). For example, theserver300 may use a method of analyzing only certain variables (e.g., relatively important variables) at the initial step of analysis, without necessarily considering all variables, then gradually extending the range of variables and narrowing the corresponding data. Hereinafter, for convenience of explanation, data compared across at least two data sets and satisfying these conditions is referred to as coinciding data.
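The heuristic comparison described above may be sketched as follows. This is an illustrative sketch only, not the disclosed implementation: the entry layout, the similarity measure, and the tolerance values (`time_tol`, `sim_threshold`) are assumptions introduced for illustration.

```python
# Illustrative sketch: match voice information sets heuristically, first by a
# coarse variable (timestamp proximity), then by a finer variable (waveform
# similarity within an error range). All names and thresholds are assumptions.

def waveform_similarity(a, b):
    """Crude similarity in [0, 1]; 1.0 for identical sample lists."""
    if len(a) != len(b) or not a:
        return 0.0
    peak = max(max(abs(x) for x in a), max(abs(x) for x in b), 1e-9)
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - min(diff / peak, 1.0)

def find_matching_devices(entries, time_tol=1.0, sim_threshold=0.9):
    """entries: list of dicts with 'device', 'timestamp', 'waveform' keys.
    Returns pairs of device identifiers judged to coincide."""
    matches = []
    for i in range(len(entries)):
        for j in range(i + 1, len(entries)):
            a, b = entries[i], entries[j]
            # Coarse step: timestamps must fall within the same time zone (period).
            if abs(a["timestamp"] - b["timestamp"]) > time_tol:
                continue
            # Fine step: waveforms must be the same within a certain error range.
            if waveform_similarity(a["waveform"], b["waveform"]) >= sim_threshold:
                matches.append((a["device"], b["device"]))
    return matches
```

Note that the coarse timestamp check runs first, so the more expensive waveform comparison is skipped for most candidate pairs, mirroring the staged narrowing described above.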
Further, theserver300 may analyze recording data of the voice information when two or more user devices having related voice information are found. At this time, each of the found voice information sets may be analyzed in the recording data analysis. In one example according to an embodiment of the present invention, the recording data of the respective voice information sets may be the same recording data input from one user, and thus only one recording data set may be analyzed. For example, theserver300 may recognize a voice command (or a voice parameter) (for example, "device connection", "device connection current image file transmission", "device connection WiFi display", "device connection speaker", "device connection phonebook transmission", etc.) through the analysis (e.g., voice recognition) of the recording data. Accordingly, theserver300 may recognize the type of service intended to be performed through the connection between devices.
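Splitting a recognized utterance into the start command and the trailing service command might be sketched as below. The exact phrase strings and the function name are illustrative assumptions; the document gives "device connection" as one example start command.

```python
# Illustrative sketch: separate the start command ("device connection") from
# the optional service command that follows it in the recognized text.

START_COMMAND = "device connection"  # example start command from the text

def parse_voice_command(recognized_text):
    """Return (start_command, service_command or None), or None when the
    utterance does not request a service connection at all."""
    text = recognized_text.strip().lower()
    if not text.startswith(START_COMMAND):
        return None
    remainder = text[len(START_COMMAND):].strip()
    return (START_COMMAND, remainder or None)
```

A bare "device connection" yields no service command, while "device connection speaker" yields the service command "speaker", which the server can then map to a service type.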
When the coinciding user devices and the type of service intended to be performed are determined according to the above operation, theserver300 generates service information including the service type, connection information and execution information for each determined user device, and transmits the generated service information to each user device.
A set of user devices (e.g., thefirst user device100 and the second user device200), which respectively receive service information from theserver300, determines the master mode or slave mode according to the service type in the received service information, is connected to the user device to be connected (e.g., thefirst user device100 or the second user device200) according to the connection information in the corresponding mode, and may execute the service according to the execution information. For example, thefirst user device100 and thesecond user device200 may form a wireless LAN link, and when a mutual connection is established, the user device determined as master may transmit the currently displayed image file to the opponent user device. Further, the user device determined as master may operate as a left speaker and the user device determined as slave may operate as a right speaker (or vice versa), thereby respectively outputting audio of a media file replayed in the master device. Further, the user device determined as master may operate as an input means and the user device determined as slave may operate as a display means so that information input by the master user device may be displayed through the slave user device. Further, data displayed in the user device determined as master may be transmitted to the user device determined as slave so that the data displayed in the master user device may be shared with the slave user device so as to be displayed.
FIG. 2 is a diagram schematically illustrating a configuration of a user device according to an embodiment of the present invention. InFIG. 2, the user device represents thefirst user device100 and thesecond user device200, and the configuration ofFIG. 2 may be applied to both thefirst user device100 and thesecond user device200.
Referring toFIG. 2, a user device according to an embodiment of the present invention may include awireless communication unit210, auser input unit220, adisplay unit230, anaudio processing unit240, astorage unit250, aninterface unit260, acontroller270 and apower supply unit280. The components of the user device illustrated inFIG. 2 are not all essential, and in certain embodiments of the present invention one or more of them may be omitted. Therefore, a user device according to certain alternative embodiments of the present invention may include more or fewer components than those illustrated inFIG. 2.
Thewireless communication unit210 includes at least one module that allows wireless communication between the user device and a wireless communication system or between the user device and a network where the user device is located. For example, thewireless communication unit210 may include one or more of amobile communication module211, a wireless local area network (WLAN)module213, a shortrange communication module215, alocation calculation module217, and abroadcast reception module219.
Themobile communication module211 transmits and receives wireless signals to and from at least one of a base station, an external device, and a server on a mobile communication network. The wireless signal may include various forms of data according to transmission and reception of a voice call signal, a video call signal or a text/multimedia message. Themobile communication module211 transmits voice information to apredefined server300 through the mobile communication network, and receives service information corresponding to the voice information according to the service connection mode of the user device. According to an embodiment of the present invention, the voice information may include recording data of the user voice and related information that is necessary for performing a service connection between user devices. The related information may include the timestamp and the user device's identification information (e.g., address, identifier, phone number, etc.). Further, according to an embodiment of the present invention, the service information includes mode information (e.g., master mode or slave mode) determined for the user device, function information to be performed in the determined mode, and device information of the user device to be connected for the service connection.
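The two payloads described above, the voice information a device uploads and the service information the server returns, might be modeled as follows. The field names are illustrative assumptions chosen to match the fields listed in the text, not names from the disclosure.

```python
# Illustrative sketch (assumed field names) of the payloads described above.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VoiceInformation:
    recording_data: List[float]          # recorded voice waveform samples
    timestamp: float                     # time the voice command button was input
    device_id: str                       # address / identifier / phone number

@dataclass
class ServiceInformation:
    mode: str                            # "master" or "slave"
    connection_info: str                 # identifier of the device to connect
    execution_info: Optional[str] = None # function/service to execute, if any
```

Making `execution_info` optional reflects the later description that execution information is included only when the voice command carries a service command beyond the start command.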
Thewireless LAN module213 is a module for performing a wireless Internet connection and for forming a wireless LAN link with another user device, and may be internally or externally installed in the user device. Some examples of wireless Internet technologies are wireless LAN (WLAN), WiFi, Wireless Broadband (WiBro), World Interoperability for Microwave Access (WiMAX), and High Speed Downlink Packet Access (HSDPA). Thewireless LAN module213 may transmit voice information defined in the present invention or receive service information from theserver300 through the wireless Internet. When the connection method for service connection of the user device is set to the wireless LAN method, thewireless LAN module213 forms a wireless LAN link with the user device corresponding to the service information.
The shortrange communication module215 is a module for performing short range communication. Some examples of short range communications include Bluetooth, Radio Frequency IDentification (RFID), InfraRed Data Association (IrDA), Ultra WideBand (UWB), ZigBee, and Near Field Communication (NFC). If the connection method for service connection of the user device is set to a short range communication method, the shortrange communication module215 may form a short range communication link with the user device corresponding to the service information.
Thelocation calculation module217 is a module for acquiring the location of the user device, and a representative example thereof is a Global Positioning System (GPS). Thelocation calculation module217 may calculate distance information from three or more base stations (or satellites) and accurate time information and then apply trigonometry to the calculated information, thereby acquiring three-dimensional current location information according to latitude, longitude and altitude. Further, thelocation calculation module217 may calculate location information by continually receiving the current location of the user device in real time from three or more satellites. The location information of the user device may be acquired by various methods.
Thebroadcast reception module219 receives a broadcast signal (e.g., a TeleVision (TV) broadcast signal, a radio broadcast signal, a data broadcast signal, etc.) and/or broadcast-related information (e.g., information related with a broadcast channel, broadcast program or broadcast service provider) from an external broadcast management server through a broadcast channel (e.g., a satellite channel, a terrestrial channel, etc.). The broadcast signals received through thebroadcast reception module219 according to an embodiment of the present invention may be transmitted (or streamed) to the opponent user device and then be displayed.
Theuser input unit220 generates input data for operation control of the user device by a user. Theuser input unit220 may include one or more of a keypad, a dome switch, a touch pad (constant voltage/constant current), a jog wheel, a jog switch, etc. In particular, theuser input unit220 may include avoice command button150, for example, of a hardware or software interface type, that initiates the process for a voice-based service connection of the present invention.
Thedisplay unit230 displays (outputs) information processed in the user device. For example, if the user device is in a calling mode, a calling-related User Interface (UI) or Graphic User Interface (GUI) is displayed. Further, thedisplay unit230 displays a photographed and/or received image or a UI and GUI when the user device is operating in a video call mode or a photographing mode. Thedisplay unit230 displays a UI or GUI related with an internally or externally collected message. Thedisplay unit230 may display an execution screen executed at the time of a voice-based service connection of the present invention. For example, when thevoice command button150 is input, a UI or GUI may be displayed that is related with a guide screen that guides a voice input, an authentication processing screen according to authentication performed on an input user voice, and a service execution screen showing mode information determined according to the service information, device information of a device to be connected, execution information of a connected device, an executed service, etc.
Thedisplay unit230 may include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor-Liquid Crystal Display (TFT LCD), a Light Emitting Diode (LED), an Organic LED (OLED), an Active Matrix OLED (AMOLED), a flexible display, a bended display, and a three-dimensional (3D) display. Some of these displays may be implemented as a transparent display (including either of a transparent type or a light-transmitting type transparent display), so that an opposite side of the display may also be viewed.
According to an embodiment of the present invention, when thedisplay unit230 and a touch panel for sensing a touch operation form a mutual layer structure (hereinafter, referred to as a “touch screen”), thedisplay unit230 may also be used as an input device as well as an output device. In this case, the touch panel may be configured to convert changes of pressure applied to a certain part of thedisplay unit230 or capacitance generated on a certain part of thedisplay unit230 into electric input signals. The touch panel may be configured to detect pressure corresponding to the touch, as well as detect the touched location and area. When a touch input for the touch panel is received, the corresponding signals are transmitted to a touch controller (not shown). The touch controller processes the signals and transmits corresponding data to thecontroller270. As such, thecontroller270 is informed of which part of thedisplay unit230 has been touched.
Theaudio processing unit240 transmits audio signals input from thecontroller270 to thespeaker241, and transmits audio signals such as voice input from themicrophone243 to thecontroller270. Theaudio processing unit240 converts voice/sound data into audible sound through the speaker according to the control of thecontroller270, and converts audio signals such as a voice received from themicrophone243 into digital signals so as to transmit the converted signals to thecontroller270. In particular, theaudio processing unit240 according to embodiments of the present invention outputs sound containing voice information (particularly, recording data (voice waveform), timestamp, etc.) through thespeaker241 under control of thecontroller270, receives sounds containing voice information, and transmits the sounds to thecontroller270. According to an embodiment of the present invention, at least one of theaudio processing unit240 and thespeaker241 includes an additional circuit or electronic parts (e.g., a resistor, condenser, etc.) so that output sounds are input through themicrophone243 of another user device that exists in an adjacent area.
Thespeaker241 may output audio data that is received from thewireless communication unit210 while operating in a calling mode, recording mode, voice recognition mode, broadcast reception mode, etc., or output audio data stored in thestorage unit250. Thespeaker241 outputs sound signals related with a function performed in the user device (e.g., a calling signal reception sound, a message reception sound, replay of music content, etc.). Thespeaker241 may output sounds including voice information with a predetermined output intensity according to an embodiment of the present invention, which is described in further detail herein below.
Themicrophone243 receives external sound signals while operating in the calling mode, recording mode, voice recognition mode, etc., and processes the received external sound signals into electric voice data. In the calling mode, the voice data is converted into a form that may be transmitted to a mobile communication base station through themobile communication module211, and then the converted data is output. Various noise removing algorithms for removing noise generated in the process of receiving external sound signals may be implemented in themicrophone243. Themicrophone243 may receive sounds output from the speaker (not shown) of another user device and then transmit the sounds to thecontroller270 according to an embodiment of the present invention.
Thestorage unit250 stores a program for processing and controlling of thecontroller270, and performs a function for temporarily storing input/output data (e.g., a phone number, message, audio, still image, electronic book, moving image, voice information, log information, etc.). The use frequency for each item of the data (e.g., the use frequency of each phone number, each message and each multimedia, etc.) and the importance thereof may be stored together in thestorage unit250. Further, data about various patterns of vibrations and sounds output at the time of a touch input on the touch screen may be stored in thestorage unit250. In particular, thestorage unit250 may store a communication scheme to be executed at the time of service connection, service information received from theserver300, a service start command (e.g., “device connection”, etc.), a service command (e.g., “transmit file”, “share address list”, “execute speaker”, and “connect keyboard”, etc.). According to an embodiment of the present invention, the start command and service command may be defined according to a user's settings or in any other suitable way. With respect to a service command, function information of a function to be operated by the user device (e.g., information indicating whether the user device will operate as an input means or as a display means at the time of a keyboard connection, information indicating whether the user device will be in charge of the left sound or the right sound when the speaker is executed, etc.) may be mapped together. Further, thestorage unit250 may store the platform ofFIG. 3, which is described herein below.
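The mapping between service commands and per-mode function information that the storage unit keeps could be represented as a simple lookup table. The command strings below come from the text; the function labels and the helper name are illustrative assumptions.

```python
# Illustrative sketch: a table mapping each service command to the function
# information of each mode, as might be kept in the storage unit. The
# function labels ("left sound", etc.) are assumptions for illustration.

SERVICE_COMMAND_TABLE = {
    "transmit file":      {"master": "send file",   "slave": "receive file"},
    "share address list": {"master": "send list",   "slave": "merge list"},
    "execute speaker":    {"master": "left sound",  "slave": "right sound"},
    "connect keyboard":   {"master": "input means", "slave": "display means"},
}

def function_for(command, mode):
    """Look up the function a device should perform for a command/mode pair."""
    entry = SERVICE_COMMAND_TABLE.get(command)
    return entry.get(mode) if entry else None
```

With such a table, once a device learns its mode from the service information, a single lookup yields the role it plays (e.g., left versus right sound for "execute speaker").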
Thestorage unit250 includes at least one of a flash memory type, hard disk type, micro type and card type (e.g., Secure Digital (SD) or eXtreme Digital (XD)) memory, a Random Access Memory (RAM), a Static RAM (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable ROM (EEPROM), a Programmable ROM (PROM), a Magnetic RAM (MRAM), a magnetic disk and optical disk type memory, etc. The user device may also operate in relation with web storage that performs the storage function of thestorage unit250 over the Internet.
Theinterface unit260 provides a connection path to external devices connected to the user device. Theinterface unit260 receives data from an external device, receives supplied power and transfers the power to each component inside the user device, and/or transmits data inside the user device to an external device. For example, a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device including an identification module, an audio Input/Output (I/O) port, a video port, an earphone port, etc., may be included in theinterface unit260.
Thecontroller270 controls overall operation of the user device. For example, thecontroller270 performs related control and processes for voice calls, data communication, video calls, etc. Thecontroller270 includes a multimedia module (not shown) for playing or replaying of multimedia. The multimedia module (not shown) may be implemented in thecontroller270 or may be implemented separately from thecontroller270. In particular, thecontroller270 may control overall operation related with automatic connection and execution of services between user devices by using a user voice input through the user device of the present invention.
For example, when a service is started by thevoice command button150, thecontroller270 controls generation of voice information based on a voice input through themicrophone243, and controls voice information, which is generated at the time of sensing a cancellation of thevoice command button150, to be transmitted to theserver300 through thewireless communication unit210 or to be externally output through thespeaker241. Further, when service information is received from theserver300, thecontroller270 determines the mode of the user device according to the received service information, and controls service connection and service execution according to the determined mode. Further, when sound containing voice information is received from another user device through themicrophone243, thecontroller270 determines the mode of the user device according to the received voice information and controls service connection and service execution according to the determined mode. Accordingly, thecontroller270 is in charge of overall control related with service connection between user devices, as well as service execution functions using voice, in accordance with embodiments of the present invention. The detailed control operations of thecontroller270 are described herein with respect to an operation example of a user device and a control method thereof with reference to the drawings.
Thepower supply unit280 is supplied with external and internal power and supplies the power necessary for operation of each component under the control of thecontroller270.
Though not illustrated inFIG. 2, a voice recognition module (not shown) may be stored or loaded in at least one of thestorage unit250 and thecontroller270, or may be implemented as a separate component.
Further, various embodiments described herein may be implemented in software, hardware, a recording medium that is readable by a computer or the like, or a combination thereof. According to a hardware implementation, embodiments of the present invention may be implemented using at least one of Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electric units for performing other functions. In some cases, certain operations performed according to embodiments of the present invention may be performed by thecontroller270. According to a software implementation, embodiments such as a procedure and function described in the present specification may be implemented as separate software modules. Each of the software modules may perform one or more functions and operations described herein.
Further, the user device ofFIG. 1 of the present invention may include all devices using an application processor, a Graphics Processing Unit (GPU) or a Central Processing Unit (CPU), for example, all information communication devices, multimedia devices and their application devices that support the functions of embodiments of the present invention. For example, the user device may include various devices such as a tablet Personal Computer (PC), a smart phone, a digital camera, a Portable Multimedia Player (PMP), a media player, a portable game console, a laptop computer, and a Personal Digital Assistant (PDA), as well as mobile communication terminals operated according to each communication protocol corresponding to various communication systems. Also, methods of controlling functions according to embodiments of the present invention may be applied to various display devices such as a digital TV, Digital Signage (DS) and a Large Format Display (LFD).
FIG. 3 schematically illustrates a platform structure of a user device for processing a function according to an embodiment of the present invention.
Referring toFIG. 3, the platform of the user device according to an embodiment of the present invention may include Operating System (OS) based software to perform various operations related with service connection that uses the above voice. As illustrated inFIG. 3, the user device may be designed to include akernel310, avoice recognition framework330, and asharing service application350.
Thekernel310 is the core of the OS, and performs an operation including at least one of a hardware driver, security of hardware and processor within the device, efficient management of system resources, memory management, provision of an interface for hardware via hardware abstraction, a multi-process, and service connection management. The hardware driver within thekernel310 includes at least one of a display driver, an input device driver, a WiFi driver, a Bluetooth driver, a USB driver, an audio driver, a power manager, a binder driver, and a memory driver.
Thevoice recognition framework330 includes a program that serves as a basis for an application within thesharing service application350. Thevoice recognition framework330 may be compatible with any application, may reuse components, and may allow movement and exchange of components. Thevoice recognition framework330 may include a support program, a program for connecting other software components, etc. For example, thevoice recognition framework330 may include asound recorder351, anengine manager353, aspeech recognition engine355, aconnection manager357, aservice manager359, etc.
Thesound recorder351 may receive input from themicrophone243, record the user's voice transmitted through thekernel310, and generate recording data.
Theengine manager353 transmits recording data transmitted from thesound recorder351 to thespeech recognition engine355, and transmits voice information to theconnection manager357 according to the result information transmitted from thespeech recognition engine355. Theengine manager353 may record a timestamp for the input time when thevoice command button150 is input. Further, when the analysis result transmitted from thespeech recognition engine355 indicates that the voice input is a command that requests a service connection, theengine manager353 may generate voice information including the timestamp, the recording data and the device address of the device, transmit the voice information to theconnection manager357, and request transmission of the data to theserver300.
Thespeech recognition engine355 analyzes recording data transmitted through theengine manager353. More specifically, thespeech recognition engine355 analyzes the recording data and determines whether the start command that requests a service connection is included. When the recording data is understood as a command that requests a service connection between devices, thespeech recognition engine355 transmits the result of the analysis to theengine manager353. According to an embodiment of the present invention, thespeech recognition engine355 converts input signals into text or voice, and may be composed of a Text-To-Speech (TTS) engine for converting input text into a voice and a Speech-To-Text (STT) engine for converting a voice into text.
Theconnection manager357 receives voice information from theengine manager353, and transmits the received voice information to theserver300 through thewireless communication unit210. Then theserver300 determines whether other devices have transmitted the same or similar recording data having a timestamp in the same time zone (e.g., period), and if there is a device having the same recording data and a timestamp of the same time zone, service information for a service connection with that device is transmitted to the device. Thus, if the service information is received from theserver300, theconnection manager357 transmits the service information to theservice manager359. Further, theconnection manager357 processes a connection using the communication scheme that is set, according to the device address of the device to be connected transmitted from theservice manager359. Thereafter, if a connection with the device to be connected is established, theconnection manager357 processes performance of a service related with the device to be connected according to the execution information of the service transmitted from theservice manager359.
Theservice manager359 receives service information from theconnection manager357, and analyzes the received service information. Theservice manager359 analyzes the operation mode of the device, the device address of the device to be connected and the type of the executed service from the service information. At this time, when the service information is received, theservice manager359 collects application information currently under operation in the device from thesharing service application350.
Thesharing service application350 includes various programs that may be operated and displayed in the device. Some examples thereof are a UI application for providing various menus within the device and an application that may be downloaded from an external device or through a network and then stored, and may be freely installed or deleted. Through such applications, the device can perform services such as an Internet phone service by network connection, a Video On Demand (VOD) service, a web album service, a Social Networking Service (SNS), a Location-Based Service (LBS), a map service, a web search service, an application search service, a text/multimedia message service, a mail service, an address list service, a media replay service, etc. Further, various functions, such as a game function and a schedule management function may be performed.
Furthermore, a platform according to embodiments of the present invention may further include middleware (not shown). The middleware (not shown) is located between thekernel310 and theapplication layer350, and may play the role of a medium so that data may be exchanged between different hardware or software sets. As such, the middleware enables a standardized interface, support of various environments, and mutual linkage with tasks of other systems having different configurations to be provided.
Further, the above-described platform may be usable in various electronic devices as well as a user device according to embodiments of the present invention. Further, a platform according to embodiments of the present invention may be stored or loaded in at least one of thestorage unit250, thecontroller270, or a separate processor (not shown). Further, a separate application processor (not shown) for executing an application may be further provided in the device.
FIG. 4 is a signal flowchart illustrating a process of a service connection between user devices according to an embodiment of the present invention, andFIG. 5 is a diagram illustrating an example where voice information used in a service connection of user devices is stored according to an embodiment of the present invention.
Referring toFIG. 4, if avoice command button150 input is received, insteps401 and403, thefirst device100 and thesecond device200 respectively record the user voice input through themicrophone243, insteps405 and407. For example, the user may input (e.g., speak) a start command (e.g., “connect device”) for establishing a connection between devices or a service command (e.g., “connect device transmit file”, “connect device share address list”, etc.) while pushing thevoice command button150 provided in thefirst device100 and thesecond device200, and may cancel the input of thevoice command button150.
When the input of thevoice command button150 is canceled, thefirst device100 and thesecond device200 perform voice recognition on the recorded voice, atsteps409 and411. Thefirst device100 and thesecond device200 determine whether a start command (e.g., “device connection”) for establishing a service connection between devices is detected through the analysis of the recorded voice.
When the start command for service connection is detected, thefirst device100 and thesecond device200 generate voice information sets and respectively transmit the generated voice information sets to theserver300, insteps413 and415. According to embodiments of the present invention, the voice information may include recording data of the user's voice and the related information that is needed for a service connection between user devices. For example, the voice information may include the recording data (voice waveform), a timestamp and a device identifier, etc.
When voice information is received from thefirst device100 and thesecond device200, theserver300 divides the received voice information according to device, and stores the divided voice information in the database, instep417. Further, theserver300 performs analysis on the voice information stored in the database, instep419.
For example, when the voice information is received, theserver300 checks whether there is voice information corresponding to the received voice information in the database. For example, as illustrated inFIG. 5, theserver300 may divide or categorize the received voice information by device, and store device information, recording data (e.g., a voice waveform), a timestamp, connection information (e.g., a device address) and raw data, etc. Further, theserver300 may search for voice information that coincides with the stored voice information as inFIG. 5. For example, theserver300 may detect voice waveforms which are determined to be the same as the recording data of the received voice information, and determine whether the detected voice waveforms have been generated in the same time zone (e.g., with a delay less than a threshold) by checking the timestamps of the detected waveforms. Further, when the same voice waveform and the same time zone are confirmed, theserver300 determines that the corresponding devices are trying to establish a connection with each other. For example, theserver300 may determine that device B and device E inFIG. 5 have the same voice information.
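The per-device storage and lookup of FIG. 5 might be sketched as below. The dictionary layout, the exact-waveform equality test, and the tolerance constant are illustrative assumptions; in practice the waveform comparison would be the error-tolerant matching described earlier.

```python
# Illustrative sketch of the FIG. 5 database: voice information stored keyed
# by device, then searched for coinciding entries (same waveform, timestamps
# in the same time zone). Names and the tolerance value are assumptions.

TIME_ZONE_TOLERANCE = 1.0  # seconds; entries within this window "coincide"

database = {}  # device_id -> {"waveform": ..., "timestamp": ...}

def store_voice_info(device_id, waveform, timestamp):
    """Store (or replace) the voice information entry for one device."""
    database[device_id] = {"waveform": waveform, "timestamp": timestamp}

def find_coinciding(device_id):
    """Return device ids whose stored entry coincides with this device's."""
    mine = database[device_id]
    return [
        other for other, entry in database.items()
        if other != device_id
        and entry["waveform"] == mine["waveform"]
        and abs(entry["timestamp"] - mine["timestamp"]) <= TIME_ZONE_TOLERANCE
    ]
```

In the FIG. 5 example, storing entries for devices B and E with the same waveform and nearby timestamps would make each turn up in the other's `find_coinciding` result.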
At this time, theserver300 processes voice information analysis according to the order in which the voice information has been received. For example, when the voice information of thefirst device100 is received ahead of that of thesecond device200, the analysis is performed on the basis of the voice information received from thefirst device100. Further, if the detected voice information corresponding to the voice information received from thefirst device100 is the voice information of thesecond device200, the timestamp of the voice information received from thesecond device200 is checked, and if the timestamp belongs to the same time zone (e.g., within a time period of a certain length), the analysis on the voice information received from thesecond device200 may be omitted.
Theserver300 recognizes a connection method to be performed amongst the devices (e.g., thefirst device100 and the second device200) from which coinciding sound information has been provided through the analysis on the voice information, and the service type, instep421. For example, when at least two devices having mutually coinciding voice information are determined to exist, theserver300 checks the timestamp of each device, recognizes the mode in which each device is to operate according to the difference of the times recorded in the timestamps, and recognizes the service intended to be performed by analyzing the voice command from the recording data. More specifically, theserver300 recognizes which service connection is to be established between the devices through the process of analyzing the recognized voice command (e.g., "connect device", "connect device transmit file", etc.).
Theserver300 may generate service information according to the service type for each respective device (e.g., thefirst device100 and the second device200), and respectively transmit the corresponding service information to thefirst device100 and thesecond device200, insteps423 and425. For example, theserver300 may respectively generate service information to be transmitted to each device (thefirst device100 and the second device200) using the service type, connection information and execution information for each device, and may respectively transmit the generated service information to thefirst device100 and thesecond device200. The service information may include the service type, connection information and execution information for each device, and the execution information may be selectively included according to the analyzed voice command type. For example, if the voice command includes only the service start command (“connect device”) as in “connect device”, the execution information may be omitted, and when the voice command includes the service start command (“connect device”) and the service command (“transmit file”) as in “connect device transmit file”, the execution information directing file transmission may be included.
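One way the per-device service information of steps 423 and 425 could be assembled is sketched below. The field names, the two-word start command, and the earliest-timestamp-is-master rule are illustrative assumptions, not details fixed by the text.

```python
def build_service_info(voice_command, peers):
    """peers: list of (device_address, timestamp) pairs; the device whose
    voice-command button was pressed first is treated as master.
    Returns one service-information dict per device address."""
    ordered = sorted(peers, key=lambda p: p[1])  # earliest button press first
    parts = voice_command.split()
    # "connect device" -> no execution info; "connect device transmit file" -> "transmit file"
    execution = " ".join(parts[2:]) or None
    infos = {}
    for i, (addr, _ts) in enumerate(ordered):
        other = ordered[1 - i][0] if len(ordered) == 2 else None
        infos[addr] = {
            "service_type": "master" if i == 0 else "slave",
            "connection_info": other,     # address of the peer to connect to
            "execution_info": execution,  # omitted (None) for bare "connect device"
        }
    return infos

infos = build_service_info("connect device transmit file", [("A", 10.0), ("B", 10.4)])
```

With the usage above, device A (earlier timestamp) is assigned the master role and receives device B's address as its connection information.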
When service information is received from theserver300 in response to the transmitted voice information, thefirst device100 and thesecond device200 perform a connection that has been mutually determined according to the received service information, instep427. For example, thefirst device100 and thesecond device200 may determine the mode to be operated according to the service information (for example, the service type) and determine the information of the device to be connected according to the service information (for example, connection information). Further, the device operating in master mode from among thefirst device100 and thesecond device200 attempts to connect with a device corresponding to service information (particularly, connection information) according to a preset connection scheme (e.g., a Bluetooth connection, wireless LAN connection, WiFi direct connection, etc.).
Thefirst device100 and thesecond device200 perform a service according to the service information when mutually connected, instep429. For example, when the mutual connection is completed, the device operating in master mode may execute, with the connected device operating in slave mode, a service corresponding to the service information (particularly, the execution information).
For example, when thefirst device100 operates in a master mode and thesecond device200 operates in a slave mode, if execution information corresponds to <transmit file>, the file currently selected (or displayed) in thefirst device100 may be transmitted to thesecond device200. Further, if the execution information corresponds to <share address list>, the address list information of thefirst device100 may be transmitted to thesecond device200. Here, when sharing the address list, the address list of thesecond device200 may be synchronized with the address list of thefirst device100, or the address list of thefirst device100 may be added to the address list of thesecond device200. Further, if the execution information corresponds to <connect game>, the network game connection between thefirst device100 and thesecond device200 may be performed. Further, if the execution information corresponds to <execute speaker>, thefirst device100 may be in charge of outputting the left sound of the surround speaker, and thesecond device200 may be in charge of the right sound of the surround speaker. Further, if the execution information corresponds to <connect keyboard>, thesecond device200 may operate as a virtual keyboard or a virtual mouse so that thesecond device200 may operate as an input means of thefirst device100.
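The execution services listed above lend themselves to a dispatch table keyed on the execution information. The handlers below are stubs that only describe each action; they are a sketch, not real transfer or audio-routing logic.

```python
def run_execution_service(execution_info, master, slave):
    """Dispatch the execution services named in the text; handlers are stubs."""
    handlers = {
        "transmit file":      lambda: f"{master} sends current file to {slave}",
        "share address list": lambda: f"{master} address list merged into {slave}",
        "connect game":       lambda: f"network game between {master} and {slave}",
        "execute speaker":    lambda: f"{master}=left channel, {slave}=right channel",
        "connect keyboard":   lambda: f"{slave} acts as virtual keyboard for {master}",
    }
    handler = handlers.get(execution_info)
    # No execution info (None) means the service-execution step is skipped
    return handler() if handler else None
```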
Further, as described above, if execution information is not included in the service information, step429 of executing a service between thefirst device100 and thesecond device200 may be omitted.
FIG. 6 is a flowchart illustrating a process of establishing a service connection based on a user voice in a user device according to an embodiment of the present invention.
Referring toFIG. 6, when an input of avoice command button150 is received instep601, acontroller270 records a user voice input through themicrophone243, instep603. At this time, when an input (e.g., actuation) of thevoice command button150 is detected, thecontroller270 checks the state of themicrophone243, and if themicrophone243 is in a deactivated state, thecontroller270 activates themicrophone243. Further, thecontroller270 may control a display of a guide screen for guiding a user's voice input through thedisplay unit230.
When sensing a cancellation of an input of thevoice command button150, thecontroller270 performs recognition on the recorded user voice, instep605, and analyzes the result of the voice recognition, instep607. Further, thecontroller270 determines whether the user voice corresponds to a command for service connection through the recognition performed on the recorded user voice, instep609. More specifically, thecontroller270 determines whether a start command (e.g., “connect device”) for establishing a service connection between devices has been detected through voice recognition.
If a start command for service connection is not detected (“NO” in step609), thecontroller270 may control performance of the operation, instep611. For example, thecontroller270 may perform a search of a user's input internal data (or content, etc.), or perform a search of a market (or a general Internet search).
If a start command for establishing a service connection is detected (“YES” in step609), thecontroller270 generates voice information instep613, and transmits the generated voice information to an already promisedserver300, instep615. For example, thecontroller270 may generate voice information including recording data of a user voice needed for service connection between devices, time information (timestamp) relating to the time thevoice command button150 was inputted, and the device identifier (device address), and transmit the generated voice information to theserver300.
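A minimal sketch of the voice-information payload of step 613 follows, assuming a JSON encoding; the patent does not specify a wire format, so the encoding and field names here are illustrative.

```python
import json

def make_voice_information(recording_data: bytes, device_address: str,
                           button_press_time: float) -> bytes:
    """Bundle the fields named in step 613: recording data of the user voice,
    the timestamp of the voice-command-button input, and the device identifier."""
    payload = {
        "recording": recording_data.hex(),  # raw voice waveform, hex-encoded
        "timestamp": button_press_time,     # when the button was pressed
        "device": device_address,           # e.g., a device (MAC) address
    }
    return json.dumps(payload).encode("utf-8")
```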
After transmitting the voice information, if a response to the voice information is received from theserver300, instep617, thecontroller270 determines whether the response from theserver300 corresponds to the service information, instep619. If the response of theserver300 is not service information (“NO” in step619), thecontroller270 controls performance of the operation, instep621. For example, if the response received from theserver300 corresponds to error information, not service information, thecontroller270 controls the output of the error information. Further, after the error information is output, thecontroller270 re-performs a service connection procedure in response to an input from the user, or may terminate the service connection procedure.
If the response of theserver300 corresponds to the service information (“YES” in step619), thecontroller270 analyzes the received service information, instep623. For example, thecontroller270 may determine the service type, the device of the object to be connected, and the execution service with the connected device from the received service information.
Thecontroller270 determines the mode (i.e., master mode or slave mode) according to the service information, instep625, and performs a connection with a device to be connected using a set communication method according to the determined mode, instep627. For example, thecontroller270 determines one operation mode among the master mode and the slave mode according to service information, transmits a connection request to a device to be connected in the determined operation mode, receives a connection request from the device to be connected, and performs a connection accordingly.
When connected with the device to be connected, thecontroller270 performs a service corresponding to the device and service information, instep629. For example, as described above, thecontroller270 performs a service corresponding to execution information at master mode or may perform a service corresponding to the execution information at slave mode.
FIG. 7 is a flowchart illustrating a process of authenticating user devices in aserver300 according to an embodiment of the present invention.
Referring toFIG. 7, if voice information is received from a certain device, instep701, theserver300 stores the received voice information, instep703, and performs an analysis on the voice information, instep705. At this time, theserver300 divides or categorizes the received information according to device, and stores the divided or categorized information in the database.
Theserver300 determines whether the voice corresponds to the service connection command through the analysis on the voice information, instep707. For example, theserver300 determines whether the received voice information includes a predefined command such as a start command (e.g., “connect device”) for establishing a service connection between devices.
If the received voice information does not include a service connection command (“NO” at step707), theserver300 processes performance of the operation, instep709. For example, theserver300 determines that the received voice information corresponds to a general voice search word and performs a search of the database according to the search word.
When the received voice information includes a service connection command (“YES” at step707), theserver300 performs a comparison and search to determine whether the voice information corresponding to the received voice information exists in the database, instep711. Based on the results of the search and comparison, theserver300 determines whether there is another set of voice information (hereinafter, referred to as “authenticated voice information”) corresponding to the received voice information through a voice waveform comparison included in the voice information, instep713. More specifically, theserver300 searches for another set of voice information having a voice waveform that coincides or matches with the recording data of the voice information, and performs a comparison and analysis to determine whether voice information sets were generated during the same time zone or period by checking the timestamp of the searched voice information.
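The waveform comparison of step 713 can be illustrated with a normalized correlation score. The 0.9 threshold and the equal-length requirement are assumptions of this sketch, not part of the described method; production systems would use more robust acoustic fingerprinting.

```python
import math

def waveform_similarity(a, b):
    """Normalized correlation of two equal-length sample sequences.
    A score near 1.0 suggests the same utterance was captured by both devices."""
    if len(a) != len(b) or not a:
        return 0.0
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

THRESHOLD = 0.9  # assumed decision threshold

def is_same_utterance(a, b):
    return waveform_similarity(a, b) >= THRESHOLD
```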
When authenticated voice information corresponding to the received voice information does not exist (“NO” in step713), theserver300 generates error information instep715, and transmits error information to the device having transmitted the voice information, instep717. For example, when theserver300 does not have a device to be connected for the service connection request, theserver300 transmits error information indicating termination of the service connection procedure to the device.
If authenticated voice information corresponding to the received voice information exists (“YES” in step713), theserver300 determines what devices are attempting connection and recognizes the service type using the received voice information and the authenticated voice information, instep719. For example, theserver300 may determine the operation mode (i.e., master mode or slave mode) for devices according to the difference or order of the times when the timestamps were recorded by checking each timestamp. Theserver300 may determine inter-device connection and execution service thereof by checking the command that has been obtained from recording data of the voice information.
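The timestamp-order rule of step 719 for assigning operation modes might look like the following, assuming (as one plausible reading) that the device whose voice-command button was pressed first becomes master and the rest become slaves.

```python
def assign_modes(records):
    """records: list of (device, timestamp) pairs. The device with the
    earliest button-press timestamp is assigned master mode."""
    ordered = sorted(records, key=lambda r: r[1])
    return {dev: ("master" if i == 0 else "slave")
            for i, (dev, _ts) in enumerate(ordered)}
```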
Theserver300 generates service information for at least two devices according to the recognized service type, instep721. Further, theserver300 divides service information according to each respective device, instep723. For example, theserver300 may generate service information to be transmitted to each device using the service type, connection information and execution information for each device, and may individually transmit the generated service information to the corresponding respective devices. For example, when a service connection is requested from two devices (e.g., thefirst device100 and the second device200) by the received voice information and authenticated voice information, theserver300 respectively generates the first service information to be transmitted to thefirst device100 and the second service information to be transmitted to thesecond device200. Further, theserver300 transmits the first service information to thefirst device100 and transmits the second service information to thesecond device200.
FIG. 8 schematically illustrates a technique for connecting a service between user devices according to an embodiment of the present invention.
As illustrated in the aboveFIG. 1, a system for service connection based on a user voice of the present invention may include afirst user device100 and asecond user device200. Unlike the system configuration ofFIG. 1,FIG. 8 illustrates an example according to a system without configuration performed by theserver300. Further, althoughFIG. 8 illustrates a service connection operation using twouser devices100 and200, embodiments of the present invention are not limited thereto. For example, a service connection for two or more devices is possible in accordance with embodiments of the present invention. Thefirst user device100 and thesecond user device200 perform a function and operation as described with reference toFIGS. 1 to 7. However, in a system environment as inFIG. 8, in thefirst user device100 and thesecond user device200, the functions and operations performed with the server are omitted, and authentication of the user voice without theserver300 is additionally performed.
Referring toFIG. 8,FIG. 8 illustrates an operation where thefirst user device100 is connected with thesecond device200 using wireless LAN. Further, in the present example, thefirst user device100 operates in master mode (i.e., AP) and thesecond user device200 operates in slave mode (i.e., Non-AP). More specifically, in the present example, thevoice command button150 of thefirst user device100 is input earlier than thevoice command button150 of thesecond user device200, and thereby the timestamps of the first andsecond user devices100 and200 are not the same.
Thefirst user device100 and thesecond user device200 include avoice command button150 of a hardware or software interface type that waits for reception of a voice input from the user to perform a voice-based service connection. Thefirst user device100 and thesecond user device200 record a timestamp when thevoice command button150 is input, and record a user voice input in a state where thevoice command button150 is pushed. Further, when an input on thevoice command button150 is canceled, thefirst user device100 and thesecond user device200 perform voice recognition on the recorded voice (i.e., recording data).
When a command for establishing a connection between user devices is detected as a result of voice recognition, thefirst user device100 and thesecond user device200 load voice information including recording data and output the voice information through thespeaker241. For example, the sound containing the first voice information generated in thefirst user device100 may be transmitted to an adjacentsecond user device200, and thesecond user device200 may receive sounds outputted from thefirst user device100 through themicrophone243. Additionally or as an alternative, the sound containing the second voice information generated in thesecond user device200 may be transmitted to an adjacent or nearby first user device100 (e.g., a device that is within a certain distance capable of detecting the sound), and thefirst user device100 may receive the sound outputted from thesecond user device200 through themicrophone243. According to embodiments of the present invention, the sound includes sound waves of audible frequencies and/or non-audible frequencies, and is referred to generally as “sound”. At this time, thefirst user device100 and thesecond user device200 encode recording data and convert the encoded recording data into sounds (i.e., sound waves) of audible frequencies or higher so as to be outputted through thespeaker241. At this time, the voice information includes the timestamp, device address, etc., as well as recording data of the user voice.
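Encoding voice information into sound waves could be done with a simple frequency-shift keying (FSK) scheme such as the sketch below. The carrier frequencies, bit duration, and sample rate are illustrative assumptions only; the patent does not fix a modulation scheme.

```python
import math

SAMPLE_RATE = 44100            # samples per second (assumed)
BIT_DURATION = 0.01            # seconds per bit (assumed)
FREQ_0, FREQ_1 = 18000, 19000  # near-ultrasonic carriers for bits 0 and 1 (assumed)

def encode_bytes_to_samples(data: bytes):
    """Minimal FSK sketch: each bit becomes a short tone at one of two
    frequencies, which the speaker can emit and a nearby microphone capture."""
    samples = []
    n = int(SAMPLE_RATE * BIT_DURATION)  # samples per bit
    for byte in data:
        for bit in range(7, -1, -1):     # most significant bit first
            freq = FREQ_1 if (byte >> bit) & 1 else FREQ_0
            samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                           for i in range(n))
    return samples
```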
Thefirst user device100 and thesecond user device200 output sounds through thespeaker241, and receive sounds output from another user device through themicrophone243 of thefirst user device100 and thesecond user device200. Then thefirst user device100 and thesecond user device200 execute a service by performing a mutual connection using voice information input through themicrophone243 and transmitted voice information.
At this time, thefirst user device100 and thesecond user device200 determine whether the input voice information corresponds to voice information for service connection by authenticating the input voice information. For example, thefirst user device100 and thesecond user device200 process authentication of input voice information through comparison and analysis for determining whether the input voice information includes the timestamp or whether the recording data of the input voice information coincides or matches with the recording data of the transmitted voice information.
When the input voice information corresponds to the authenticated voice information for service connection, thefirst user device100 and thesecond user device200 may determine the operation mode, determine the device to be connected and determine the execution service by analyzing the voice information. For example, thefirst user device100 and thesecond user device200 compare respective timestamps of voice information sets and determine master mode or slave mode according to the recording time difference or order, determine the device to be connected through the device address, and determine an execution service based on the command according to the voice recognition.
Thefirst user device100 and thesecond user device200 perform a connection using a preset communication scheme in an operation mode determined according to the analyzed voice information, and perform a service corresponding to a command at an operation mode determined when mutually connected. For example, thefirst user device100 and thesecond user device200 may form a wireless LAN link, and when a mutual connection is completed, the user device determined as master may transmit a currently displayed image file to the opponent user device. Further, the user device determined as master may operate as a left speaker, and the user device determined as slave may operate as a right speaker so that audios of a media file replayed in the master device may be respectively output. Further, the user device determined as master may operate as an input means and the user device determined as slave may operate as a display means so that information input by the user device of master may be displayed through the user device operating as a slave. Further, the data displayed in the user device determined as master may be transmitted to the user device determined as slave, and the data displayed in the user device may be displayed together with the user device of slave.
FIG. 9 is a signal flowchart illustrating a process of a service connection between user devices according to an embodiment of the present invention. In particular,FIG. 9 illustrates an operation of supporting a direct connection between user devices without theserver300 as illustrated above.
Referring toFIG. 9, if an input of avoice command button150 is received, insteps901 and903, thefirst device100 and thesecond device200 respectively record user voice input through themicrophone243, insteps905 and907. For example, in a state where thevoice command button150 included in thefirst device100 and thesecond device200 is pushed, the user inputs (i.e., speaks) a start command (e.g., “connect device”) for establishing a connection between devices or a service command (e.g., “connect device transmit file”, “connect device share address list”, etc.) for device connection and execution, and cancels the input of thevoice command button150.
When the input of thevoice command button150 is canceled, thefirst device100 and thesecond device200 perform voice recognition for the recorded voice, insteps909 and911. Thefirst device100 and thesecond device200 may determine whether a start command (e.g., “connect device”) for service connection between devices is detected through the analysis of the recorded voice.
When a start command for service connection is detected, thefirst device100 and thesecond device200 generate voice information sets and respectively output the voice information sets through thespeakers241 included in thefirst device100 and thesecond device200, insteps913 and915. For example, thefirst device100 and thesecond device200 may load generated voice information in sounds, and may output the sounds to an adjacent opponent device through each speaker. According to another example, the sound containing the first voice information generated in thefirst device100 may be transmitted to the adjacentsecond user device200 through thespeaker241, and thesecond user device200 may receive sounds outputted from thefirst user device100 through themicrophone243. As an alternative, the sounds containing the second voice information generated in thesecond user device200 may be transmitted to the adjacentfirst user device100 through thespeaker241, and thefirst user device100 may receive sounds output from thesecond user device200 through themicrophone243.
After outputting sounds through thespeaker241, when voice information outputted from an adjacent device is input through themicrophone243, thefirst device100 and thesecond device200 compare voice information input through themicrophone243 and the voice information transmitted through thespeaker241 for authentication, insteps917 and919. More specifically, thefirst device100 and thesecond device200 may determine whether voice information input throughrespective microphones243 corresponds to voice information for service connection by authenticating the input voice information. For example, thefirst device100 and thesecond device200 may perform authentication by performing comparison and analysis on whether input voice information includes timestamp or whether recording data of the input voice information coincides with recording data of the transmitted voice information.
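The mutual authentication of steps 917 and 919 can be sketched as a pair of checks. Combining the timestamp-presence test with the recording-data comparison is one plausible reading of the text; the exact rule is not fixed by the patent.

```python
def authenticate(received, transmitted):
    """Sketch of the device-side authentication: the incoming voice
    information must carry a timestamp, and its recording data must match
    what this device transmitted (the same utterance captured by both
    microphones)."""
    if "timestamp" not in received:
        return False
    return received.get("recording") == transmitted.get("recording")
```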
When the input voice information for service connection is successfully authenticated, thefirst device100 and thesecond device200 determine the service type, insteps921 and923. For example, thefirst device100 and thesecond device200 may check the operation mode, check a device to be connected, and check a type of an execution service at the time of connection by referring to the input voice information and the transmitted voice information. Further, thefirst device100 and thesecond device200 determine the operation mode thereof according to the determination of the service type, insteps925 and927, and attempt connection according to the device to be connected or a predetermined connection scheme at each determined operation mode.
When thefirst device100 and thesecond device200 are connected instep929, thefirst device100 and thesecond device200 perform a service corresponding to voice information at each operation mode, instep931. For example, when mutual connection between thefirst device100 and thesecond device200 is completed, thefirst device100 and thesecond device200 perform a service corresponding to voice information with the device operating in slave mode to which the device operating in master mode is connected. For example, assuming that thefirst device100 operates in master mode and thesecond device200 operates in slave mode, the file currently selected (or displayed) in thefirst device100 may be transmitted to thesecond device200, the address information of thefirst device100 may be transmitted to thesecond device200, thefirst device100 may be configured to perform output of the left sound of the surround speaker or thesecond device200 may be configured to perform output of the right sound of the surround speaker, or thesecond device200 may operate as a virtual keyboard or a virtual mouse so that thesecond device200 may operate as an input means of thefirst device100.
Further, as described above, when the voice information does not include a service command to be executed and includes only a start command for connection between devices, step931 of executing a service between thefirst device100 and thesecond device200 is omitted.
FIG. 10 is a flowchart illustrating a service connection based on a user voice in a user device according to an embodiment of the present invention.
Referring toFIG. 10, if an input of avoice command button150 is received instep1001, thecontroller270 records a user voice input through themicrophone243, instep1003. At this time, when an input of thevoice command button150 is detected, thecontroller270 checks the state of themicrophone243 and controls a turn-on of themicrophone243 if themicrophone243 is in a deactivated state. Further, thecontroller270 may control a display of a guide screen that guides a user's voice input through thedisplay unit230.
When a cancellation of an input of avoice command button150 is sensed, thecontroller270 performs recognition on the recorded user voice instep1005 and analyzes the recognition result according to the voice recognition, instep1007. Further, thecontroller270 determines whether the user voice corresponds to a command for service connection through the recognition of the recorded user voice, instep1009. More specifically, thecontroller270 determines whether a start command (e.g., “connect device”) for service connection between devices is detected through voice recognition.
If a start command for service connection is not detected (“NO” in1009), thecontroller270 controls performance of the operation, instep1011. For example, thecontroller270 performs a search of internal data (or contents, etc.) for an input voice command of user, or a market (or Internet) search.
If a start command for service connection is detected (“YES” in step1009), thecontroller270 generates voice information instep1013, loads the generated voice information in sounds, and outputs the sounds through thespeaker241, instep1015. For example, thecontroller270 may convert voice information including recording data of a user voice needed for service connection between devices, time information (i.e., a timestamp) relating to the time at which thevoice command button150 has been input, and the device address for device identification, etc., into sound waves (i.e., sounds) of a certain frequency band so as to output the converted sound waves through thespeaker241.
After transmitting voice information, thecontroller270 receives next voice information through themicrophone243, instep1017. More specifically, thecontroller270 receives sounds containing voice information output from another device through themicrophone243 after voice information is transmitted. Then thecontroller270 obtains voice information by parsing the received sounds. If a voice command is not included or detected in the received sounds, thecontroller270 disregards the received sounds. More specifically, sounds without a voice command are considered as noise, and an input thereof is disregarded.
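The parse-and-disregard behavior of step 1017 might be sketched as follows, again assuming a JSON payload and a `command` field; both are illustrative choices, not taken from the patent.

```python
import json

def parse_received_sound(decoded_bytes: bytes):
    """Parse demodulated bytes back into voice information. Sounds that do
    not decode to a payload containing a voice command are treated as
    ambient noise and disregarded (None is returned)."""
    try:
        payload = json.loads(decoded_bytes.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        return None  # not our encoding: ambient noise
    if "command" not in payload:
        return None  # no voice command: disregard the input
    return payload
```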
Thecontroller270 compares voice information input through themicrophone243 with voice information output through thespeaker241, instep1019, and determines whether the input voice information corresponds to the authenticated voice information, instep1021. For example, thecontroller270 may perform authentication of voice information input through comparison and analysis on whether the input voice information includes a coinciding or matching timestamp or whether the recording data of the input voice information coincides with or matches the recording data of the outputted voice information.
If the input voice information is not authenticated voice information for a service connection (“NO” in step1021), thecontroller270 outputs error information, instep1023. For example, if input voice information is not voice information for service connection, thecontroller270 may output error information informing user of the fact that service connection is not possible.
If the input voice information is authenticated voice information for service connection (“YES” in step1021), thecontroller270 recognizes the service type by referring to the input voice information and the output voice information, instep1025. For example, thecontroller270 determines the operation mode according to the time difference between, or order of, timestamps, the device to be connected, and a service to be executed with the connected device from voice information sets.
Thecontroller270 determines the operation mode according to voice information sets, instep1027, and performs a connection with a device to be connected in a predetermined communication scheme according to the determined operation mode, instep1029. For example, thecontroller270 determines one operation mode among the master mode and the slave mode according to voice information sets, transmits a connection request to a device to be connected according to the determined operation mode, receives a connection request from the device to be connected, and performs the connection accordingly.
When connected with the device to be connected, thecontroller270 performs a service with the connected device in the determined operation mode, instep1031. For example, thecontroller270 may perform a service at master mode or perform service at slave mode as described herein above.
FIG. 11 is a flowchart illustrating a process of processing a service connection based on a user voice in a user device according to an embodiment of the present invention.
FIG. 11 shows another operation example that supports a direct connection between user devices without aserver300 as described above. In the present example, user devices for performing a service connection include a first device and a second device, where the first device is a device that requests a device connection to the second device according to a predetermined communication scheme (e.g., WiFi Direct or Bluetooth), and the second device is a device that receives a device connection request from the first device.FIG. 11 shows an operation of processing a service connection based on a user voice in the first device.
Referring to FIG. 11, if an input of the voice command button 150 is received, in step 1101, the controller 270 records a user voice input through the microphone 243, in step 1103. At this time, the controller 270 records time information (i.e., a timestamp) when detecting the input of the voice command button 150. Further, when detecting the input of the voice command button 150, the controller 270 checks the state of the microphone 243 and turns on the microphone if the microphone is in a deactivated state. Further, the controller 270 may control a display of a guide screen that guides an input of a user's voice through the display unit 230.
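The record-while-pressed flow above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation; the class name, millisecond timestamps, and byte-string audio frames are invented for the example.

```python
import time

class PushToTalkRecorder:
    """Minimal sketch of the record-while-pressed flow: capture a
    timestamp when the voice command button goes down, buffer audio
    frames while it is held, and return both on release."""

    def __init__(self):
        self.pressed_at_ms = None  # timestamp of the button press
        self.frames = []           # audio captured while the button is held

    def on_button_down(self, now_ms=None):
        # Record the time information (timestamp) at the moment of the press.
        self.pressed_at_ms = now_ms if now_ms is not None else int(time.time() * 1000)
        self.frames = []

    def on_audio_frame(self, frame: bytes):
        if self.pressed_at_ms is not None:  # record only while pressed
            self.frames.append(frame)

    def on_button_up(self):
        # Cancellation of the button input ends the recording.
        recording = b"".join(self.frames)
        pressed_at = self.pressed_at_ms
        self.pressed_at_ms = None
        return recording, pressed_at
```

The recording and its timestamp returned by `on_button_up` would then feed the voice recognition and authentication-key steps that follow.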
When sensing cancellation of the input of the voice command button 150, the controller 270 performs recognition on the recorded user voice in step 1105 and analyzes the result of the voice recognition, in step 1107. Further, the controller 270 determines whether the user voice corresponds to a command for service connection through recognition of the recorded user voice, in step 1109. More specifically, the controller 270 determines whether a start command (e.g., "connect device") for establishing a service connection between devices is detected through voice recognition.
If a start command for establishing a service connection is not detected ("NO" in step 1109), the controller 270 controls performance of a corresponding operation, in step 1111. For example, the controller 270 may perform a search of internal data (or contents) for the input user's voice command, or may perform a market search (or a general Internet search).
If a start command for service connection is detected ("YES" in step 1109), the controller 270 generates an authentication key, in step 1113. For example, if a device connection is determined according to the start command for establishing a service connection, the controller 270 may extract a voice waveform from the recording data of the user voice that is input through the microphone 243 and recorded, and may check the input time information (i.e., a timestamp). Further, the controller 270 may generate an authentication key value by using the voice waveform and the time information.
In one example according to an embodiment of the present invention, the authentication key value is generated as a unique character string (e.g., a letter, number, or alphanumeric string) by utilizing features of the recorded voice waveform information and the time information (i.e., the timestamp). For example, the generated authentication key value may be generated as a random number, such as "2412582952572257239529", and may be configured from a table of random numbers to which random numbers are mapped. The authentication key value may be divided into a portion indicating the voice waveform and a portion indicating the time information. For example, the authentication key value having a string such as "2412582952572257239529" may be divided into a string of a voice waveform portion such as "24125829525722" and a string of a time information portion such as "57239529".
At this time, the lengths of the authentication key value, the voice waveform portion, and the time information portion may be defined in advance. Further, in the authentication keys generated in the first device and the second device, the string of the voice waveform portion may have the same value, while the string of the time information portion may have different values according to the timestamps of the first device and the second device. More specifically, the time information values may be generated with different values according to the difference in recording time, and may then be included in the authentication key.
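Under the format described above (a 14-digit voice waveform portion followed by an 8-digit time information portion, as in the example key), key generation might be sketched as below. The hash-based waveform digits are purely illustrative: two separately recorded captures of the same utterance would not hash identically, so a real implementation would need a noise-robust acoustic fingerprint in place of `sha256`.

```python
import hashlib

WAVEFORM_DIGITS = 14  # length of the voice waveform portion (per the example key)
TIME_DIGITS = 8       # length of the time information portion

def generate_auth_key(waveform: bytes, timestamp_ms: int) -> str:
    """Return a numeric key: waveform-derived digits + timestamp digits.

    Devices that captured the same waveform share the first portion;
    the time portion differs according to each device's button press.
    """
    # Illustrative only: map the waveform to a stable digit string.
    digest_int = int(hashlib.sha256(waveform).hexdigest(), 16)
    waveform_part = str(digest_int)[:WAVEFORM_DIGITS]
    # Keep the low-order digits of the millisecond timestamp.
    time_part = str(timestamp_ms % 10**TIME_DIGITS).zfill(TIME_DIGITS)
    return waveform_part + time_part
```

With this split, two devices fed the same waveform but different press times agree on the first 14 digits and differ in the last 8, which is exactly the matching property the connection steps below rely on.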
When the authentication key is generated, the controller 270 sets the authentication key as device information (e.g., a device identifier) used for device identification at the time of inter-device connection, in step 1115. At this time, when generating the authentication key, the controller 270 checks a predetermined communication scheme and controls activation of a communication mode (i.e., a function) for supporting the checked communication scheme, in step 1113. For example, the controller 270 may control activation of the communication mode by turning on the wireless LAN module 213 or the short range communication module 215 (e.g., a Bluetooth module). Further, the controller 270 may change the device identifier to be used in the activated communication mode to device information based on the authentication key.
The controller 270 controls a search for a surrounding device according to the communication scheme after setting the device information based on the authentication key, in step 1117. For example, the controller 270 may search for a surrounding device to connect with via a WiFi Direct connection using the wireless LAN module 213, or may search for a surrounding device for short range communication (e.g., Bluetooth communication) using the short range communication module 215.
The controller 270 determines a device to be connected for service connection among the devices found according to the search of the surrounding devices, in step 1119. In particular, the controller 270 detects a device having device information corresponding to the previously-generated authentication key among the searched connectable devices, and determines the detected device having the corresponding authentication key as the device to be connected. At this time, the controller 270 may search for device information with a string coinciding with or matching the string of the voice waveform portion of the authentication key. More specifically, the string of the time information portion of the authentication key may differ between devices according to the time at which the voice command button 150 is input in each device, and thus the time information portions may not coincide; accordingly, the device determines whether the voice waveform portions coincide.
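The matching rule of step 1119 — compare only the voice waveform digits and ignore the time digits — could look like the following sketch, assuming 22-character identifiers whose first 14 digits encode the waveform (the split used in the example key above).

```python
WAVEFORM_DIGITS = 14  # identifier digits derived from the voice waveform

def find_device_to_connect(my_key: str, discovered_ids):
    """Return the first discovered identifier whose waveform portion
    matches ours; the trailing time digits are allowed to differ because
    each device timestamps its own button press."""
    my_waveform = my_key[:WAVEFORM_DIGITS]
    for device_id in discovered_ids:
        if device_id[:WAVEFORM_DIGITS] == my_waveform:
            return device_id
    return None  # no authenticated peer found in range
```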
When determining the device to be connected, the controller 270 transmits a device connection request to the device to be connected, in step 1121. At this time, the controller 270 may transmit a request such as a WiFi Direct connection request or a Bluetooth connection request according to the determined communication scheme.
After transmitting the connection request, when a connection approval for the connection request is received from the device to be connected in step 1123, the controller 270 performs the connection with the device to be connected according to the communication scheme, in step 1125.
The controller 270 performs a service upon connecting with the device to be connected, in step 1127. For example, after connecting to the device to be connected, the controller 270 may transmit data according to the user's request to the device to be connected, or may receive data transmitted from the device to be connected.
Further, as described above, if the service command is detected through the voice recognition, the service according to the service command may be automatically performed. The controller 270 determines whether to operate in either the master mode or the slave mode at the time of automatic service performance, and performs the service in the determined mode. At this time, the controller 270 compares the string of the time information portion of its authentication key with that in the device information of the device to be connected to check the time difference, and determines whether to operate in the master mode or the slave mode according to the checked time difference.
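A sketch of that time-difference decision follows, assuming the 8-digit time portion sits at the end of the 22-character key and that the device whose button was pressed earlier takes the master role (the text leaves the exact policy open, so this ordering is an assumption).

```python
TIME_DIGITS = 8  # trailing digits of the key that encode the timestamp

def decide_operation_mode(my_key: str, peer_key: str) -> str:
    """Master/slave selection by comparing the time information portions.
    Assumption: the earlier button press (smaller time value) becomes
    master; wrap-around of the truncated timestamp is ignored here."""
    my_time = int(my_key[-TIME_DIGITS:])
    peer_time = int(peer_key[-TIME_DIGITS:])
    return "master" if my_time < peer_time else "slave"
```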
FIG. 12 is a flowchart illustrating a process of processing a service connection based on a user voice in a user device according to an embodiment of the present invention.
FIG. 12 shows another operation example that supports a direct connection between user devices without the server 300. In the present example, the user devices for performing a service connection include a first device and a second device, where the first device requests a device connection to the second device according to a predetermined communication method, and the second device receives a device connection request from the first device. FIG. 12 shows an operation of processing a service connection based on a user voice in the second device.
Referring to FIG. 12, if the input of the voice command button 150 is received, in step 1201, the controller 270 records a user voice input through the microphone 243, in step 1203. At this time, the controller 270 records time information (i.e., a timestamp) when detecting the input of the voice command button 150. Further, when detecting the input of the voice command button 150, the controller 270 may control turning on the microphone or a display of a voice input guide screen.
When sensing a cancellation of the input of the voice command button 150, the controller 270 performs recognition on the recorded user voice in step 1205, and analyzes the result of the voice recognition, in step 1207. Further, the controller 270 determines whether the user voice corresponds to a command for service connection through the recorded user voice recognition, in step 1208. More specifically, the controller 270 determines whether a start command (e.g., "connect device") for establishing a service connection between devices is detected through voice recognition.
If a start command for service connection is not detected ("NO" in step 1208), the controller 270 controls performance of a corresponding operation, in step 1209. For example, the controller 270 may perform a search of internal data (or contents, etc.) for the input user's voice command, or may perform a market search or a general Internet search.
If a start command for service connection is detected ("YES" in step 1208), an authentication key is generated, in step 1211. For example, if a device connection is determined according to the start command for service connection, the voice waveform is extracted from the recording data of the user voice that is input and recorded through the microphone 243, and the time information (i.e., a timestamp) at which the voice command button 150 was input is checked. Further, the controller 270 may generate the authentication key value using the voice waveform and the time information. In the present example according to an embodiment of the present invention, the authentication key value is generated as a unique string by utilizing the features of the recorded voice waveform information and the time information, as described above.
When the authentication key is generated, the controller 270 sets the authentication key as device information (e.g., a device identifier) used in identifying the device when a connection is established between devices, in step 1213. At this time, when generating the authentication key, the controller checks a communication scheme that is set for establishing the service connection and controls activation of the communication mode (i.e., a function) for supporting the checked communication scheme, in step 1211. Further, the controller 270 may change the device identifier to be used in the communication mode to device information based on the authentication key.
The controller 270 receives a connection request for service connection from another external device in a state where the device information based on the authentication key has been set, in step 1215. At this time, after setting the device information based on the authentication key, the controller 270 may perform steps 1117 to 1121 as described with reference to FIG. 11. However, in the example according to FIG. 12, a connection request is received from the other external device before a connection request is transmitted to it, and in this case, the controller 270 may omit the connection request procedure.
When a connection request is received from another external device, the controller 270 compares the generated authentication key with the device information of the other device having requested the connection, in step 1217, and determines whether the authentication key coincides with or matches the device information, in step 1219. In particular, the controller 270 compares the string of the voice waveform portion of the authentication key with that of the device information so as to determine whether they coincide or match. More specifically, there may be a difference in the timestamps according to the time at which the voice command button 150 is input in each device, and thus the time information portions may not coincide; accordingly, the device determines whether the voice waveform portions coincide.
If the authentication key and the device information do not coincide with each other ("NO" in step 1219), a corresponding operation is performed, in step 1221. For example, the controller 270 may disregard the connection request from the other device and may wait for the connection request of the authenticated device for a predetermined time. Further, the controller 270 may request a service connection by performing steps 1117 to 1127 as described herein with reference to FIG. 11.
If the authentication key coincides with the device information ("YES" in step 1219), the controller 270 transmits a connection approval to the other device that has transmitted the connection request, in step 1223. For example, the controller 270 may determine that the other device that has transmitted the connection request is the device to be connected. Further, the controller 270 may transmit a connection approval to the device to be connected in response to the connection request of the device to be connected.
After transmitting the connection approval, the controller 270 performs a device connection with the device to be connected according to the communication scheme, in step 1225.
When connected with the device to be connected, the controller 270 performs a service, in step 1227. For example, after connecting with the device to be connected, the controller 270 transmits data according to the user's request to the device to be connected, or receives data transmitted from the device to be connected. Further, as described above, when a service command is detected through voice recognition, a service according to the service command may be automatically performed. At this time, as described above, the controller 270 determines whether to operate in a master mode or slave mode with reference to the authentication key and the device information of the device to be connected, and automatically performs the service in the determined operation mode.
Further, as described above, according to an embodiment of the present invention, a start command for establishing a service connection, such as "connect device", is provided. This command is used by the device to distinguish whether a voice input in the voice recognition mode is to be used for a device connection, or is a general voice input for a search of internal device data/contents. Further, the embodiments of the present invention may combine a service command for the service to be executed with the start command for establishing a service connection. More specifically, a start command for a device connection and a service command for service performance between connected devices (e.g., "transmit file") may be combined so as to be input (i.e., spoken), such as "connect device transmit file". The service command may be input after or before the start command (e.g., "transmit file connect device").
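A possible parsing of such combined utterances is sketched below, using the "connect device" start command from the text and an assumed small service vocabulary (the service phrases are illustrative, not a defined grammar).

```python
START_COMMAND = "connect device"                    # start command from the text
SERVICE_COMMANDS = {"transmit file", "play music"}  # assumed service vocabulary

def parse_voice_command(utterance: str):
    """Split a recognized utterance into (start_command, service_command).
    The service command may come before or after the start command;
    (None, None) means no device-connection start command was spoken."""
    text = utterance.lower().strip()
    if START_COMMAND not in text:
        return None, None  # treat as a general voice input (e.g., a search)
    remainder = text.replace(START_COMMAND, "", 1).strip()
    service = remainder if remainder in SERVICE_COMMANDS else None
    return START_COMMAND, service
```

An utterance with no start command falls through to the general-search branch, matching the "NO" paths of steps 1109 and 1208 above.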
It will be appreciated that embodiments of the present invention can be realized in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, devices or integrated circuits, or on an optically or magnetically readable medium such as, for example, a Compact Disc (CD), Digital Versatile Disc (DVD), magnetic disk, magnetic tape, etc.
For example, the foregoing embodiments of the present invention may be implemented in an executable program command form by various computer means and be recorded in a computer readable recording medium. The computer readable recording medium may include a program command, a data file, and a data structure individually or a combination thereof. The program command recorded in the recording medium may be specially designed or configured for embodiments of the present invention, or may be known and available to a person having ordinary skill in the computer software field. The computer readable recording medium may include Magnetic Media such as a hard disk, floppy disk, or magnetic tape, Optical Media such as Compact Disc Read Only Memory (CD-ROM) or DVD, Magneto-Optical Media such as a floptical disk, and a hardware device, such as ROM, RAM, or flash memory, that stores and executes program commands. Further, the program command may include a machine language code created by a compiler and a high-level language code executable by a computer using an interpreter. The foregoing hardware device may be configured to operate as at least one software module to perform an operation of an embodiment of the present invention, and vice versa.
It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement embodiments of the present invention. Accordingly, embodiments of the present invention provide a program including code for implementing an apparatus or a method according to embodiments of the present invention, and further provide a machine-readable storage storing such a program. Still further, such programs may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection, and embodiments of the present invention suitably encompass the same.
As described above, according to a method and apparatus for connecting a service between user devices using a voice according to embodiments of the present invention, a user-desired service is easily and quickly executed by simplifying a procedure for a connection between at least two devices and a service connection according thereto. According to embodiments of the present invention, a user may automatically establish connections amongst user devices easily and quickly by speaking (i.e., inputting) a voice command requesting a device connection in a state where the respective voice command buttons of the user devices are pushed. Further, a user may automatically execute a service by a connection between user devices easily and quickly by speaking a voice command requesting a device connection and a service intended to be actually executed in a state where the respective voice command buttons of the user devices are pushed.
According to embodiments of the present invention, a master mode and a slave mode between the respective user devices are automatically determined according to the time difference between when a user voice was input to each user device, and when the respective user devices are mutually connected, the service according to the user voice (e.g., data transmission, or performance of an operation according to the mode) may be automatically performed. Thus, a user may automatically initiate execution of the service merely by inputting the voice command according to the service and connecting the devices using the voice command button of the respective user devices.
Therefore, according to embodiments of the present invention, by implementing an optimal environment for connection between user devices and supporting execution of a service, the user's convenience may be improved, and the usability, convenience and competitiveness of the user device may be improved. The present invention may be conveniently implemented in all forms of user devices and other various devices corresponding to such user devices.
Although embodiments of the present invention have been described in detail hereinabove, it should be clearly understood that many variations and modifications of the basic inventive concepts herein taught which may appear to those skilled in the present art will still fall within the spirit and scope of the present invention, as defined in the appended claims and their equivalents.

Claims (8)

What is claimed is:
1. An electronic device comprising:
a memory configured to store at least one instruction; and
a processor configured to execute the at least one instruction to control the device to:
receive a voice input,
transmit first information to a server based on the received voice input, wherein the first information includes time information on which the voice input was received and recording data according to the voice input,
receive, from the server, second information corresponding to the first information, wherein the second information includes information of an operation mode and information of a service,
establish a connection with an external device according to the information of the service and the information of the operation mode, and
perform the service with the external device.
2. The electronic device of claim 1, wherein the first information further includes at least one of service information and device information of the external device.
3. The electronic device of claim 1, wherein the operation mode includes a master mode and a slave mode.
4. The electronic device of claim 1, wherein the first information comprises an authentication key having a unique string generated using a voice waveform of the received voice input.
5. A method comprising:
receiving a voice input;
transmitting first information to a server based on the received voice input, wherein the first information includes time information on which the voice input was received and recording data according to the voice input;
receiving, from the server, second information corresponding to the first information, wherein the second information includes information of an operation mode and information of a service;
establishing a connection with an external device according to the information of the service and the information of the operation mode; and
performing the service with the external device.
6. The method of claim 5, wherein the first information further includes at least one of service information and device information of the external device.
7. The method of claim 5, wherein the operation mode includes a master mode and a slave mode.
8. The method of claim 5, wherein the first information comprises an authentication key having a unique string generated using a voice waveform of the received voice input.
US15/797,910 | 2012-07-03 | 2017-10-30 | Method and apparatus for connecting service between user devices using voice | Active | US10475464B2 (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
US15/797,910US10475464B2 (en)2012-07-032017-10-30Method and apparatus for connecting service between user devices using voice

Applications Claiming Priority (4)

Application NumberPriority DateFiling DateTitle
KR1020120072290AKR101972955B1 (en)2012-07-032012-07-03Method and apparatus for connecting service between user devices using voice
KR10-2012-00722902012-07-03
US13/934,839US9805733B2 (en)2012-07-032013-07-03Method and apparatus for connecting service between user devices using voice
US15/797,910US10475464B2 (en)2012-07-032017-10-30Method and apparatus for connecting service between user devices using voice

Related Parent Applications (1)

Application NumberTitlePriority DateFiling Date
US13/934,839ContinuationUS9805733B2 (en)2012-07-032013-07-03Method and apparatus for connecting service between user devices using voice

Publications (2)

Publication NumberPublication Date
US20180047406A1 US20180047406A1 (en)2018-02-15
US10475464B2true US10475464B2 (en)2019-11-12

Family

ID=48917334

Family Applications (2)

Application NumberTitlePriority DateFiling Date
US13/934,839Active2034-08-09US9805733B2 (en)2012-07-032013-07-03Method and apparatus for connecting service between user devices using voice
US15/797,910ActiveUS10475464B2 (en)2012-07-032017-10-30Method and apparatus for connecting service between user devices using voice

Family Applications Before (1)

Application NumberTitlePriority DateFiling Date
US13/934,839Active2034-08-09US9805733B2 (en)2012-07-032013-07-03Method and apparatus for connecting service between user devices using voice

Country Status (5)

CountryLink
US (2)US9805733B2 (en)
EP (2)EP2683147B1 (en)
KR (1)KR101972955B1 (en)
CN (1)CN104604274B (en)
WO (1)WO2014007545A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20180247655A1 (en)*2014-01-102018-08-30Verizon Patent And Licensing Inc.Personal assistant application
US20190287513A1 (en)*2018-03-152019-09-19Motorola Mobility LlcElectronic Device with Voice-Synthesis and Corresponding Methods
US11019427B2 (en)2014-09-012021-05-25Samsung Electronics Co., Ltd.Electronic device including a microphone array
US11594219B2 (en)2021-02-052023-02-28The Toronto-Dominion BankMethod and system for completing an operation

Families Citing this family (204)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US8677377B2 (en)2005-09-082014-03-18Apple Inc.Method and apparatus for building an intelligent automated assistant
US9318108B2 (en)2010-01-182016-04-19Apple Inc.Intelligent automated assistant
US8977255B2 (en)2007-04-032015-03-10Apple Inc.Method and system for operating a multi-function portable electronic device using voice-activation
US8676904B2 (en)2008-10-022014-03-18Apple Inc.Electronic devices with voice command and contextual data processing capabilities
US20120309363A1 (en)2011-06-032012-12-06Apple Inc.Triggering notifications associated with tasks items that represent tasks to perform
US10276170B2 (en)2010-01-182019-04-30Apple Inc.Intelligent automated assistant
US10417037B2 (en)2012-05-152019-09-17Apple Inc.Systems and methods for integrating third party services with a digital assistant
KR101972955B1 (en)*2012-07-032019-04-26삼성전자 주식회사Method and apparatus for connecting service between user devices using voice
US8996059B2 (en)*2012-07-192015-03-31Kirusa, Inc.Adaptive communication mode for recording a media message
KR102746303B1 (en)2013-02-072024-12-26애플 인크.Voice trigger for a digital assistant
US9349365B2 (en)*2013-03-142016-05-24Accenture Global Services LimitedVoice based automation testing for hands free module
US9772919B2 (en)2013-03-142017-09-26Accenture Global Services LimitedAutomation of D-bus communication testing for bluetooth profiles
US10652394B2 (en)2013-03-142020-05-12Apple Inc.System and method for processing voicemail
US10748529B1 (en)2013-03-152020-08-18Apple Inc.Voice activated device for use with a voice-based digital assistant
US10176167B2 (en)2013-06-092019-01-08Apple Inc.System and method for inferring user intent from speech inputs
CN110442699A (en)2013-06-092019-11-12苹果公司Operate method, computer-readable medium, electronic equipment and the system of digital assistants
WO2015020942A1 (en)2013-08-062015-02-12Apple Inc.Auto-activating smart responses based on activities from remote devices
US10251382B2 (en)2013-08-212019-04-09Navico Holding AsWearable device for fishing
US9507562B2 (en)*2013-08-212016-11-29Navico Holding AsUsing voice recognition for recording events
JP6305023B2 (en)*2013-11-132018-04-04キヤノン株式会社 COMMUNICATION DEVICE, COMMUNICATION DEVICE CONTROL METHOD, AND PROGRAM
TW201530423A (en)*2014-01-222015-08-01Kung-Lan WangTouch method and touch system
WO2015146179A1 (en)*2014-03-282015-10-01パナソニックIpマネジメント株式会社Voice command input device and voice command input method
TWI601018B (en)*2014-04-102017-10-01拓邁科技股份有限公司Methods and systems for providing data between electronic devices, and related computer program products
WO2015170928A1 (en)*2014-05-082015-11-12Samsung Electronics Co., Ltd.Apparatus and method for changing mode of device
US9715875B2 (en)2014-05-302017-07-25Apple Inc.Reducing the need for manual start/end-pointing and trigger phrases
EP3149728B1 (en)2014-05-302019-01-16Apple Inc.Multi-command single utterance input method
US10170123B2 (en)2014-05-302019-01-01Apple Inc.Intelligent assistant for home automation
US9338493B2 (en)2014-06-302016-05-10Apple Inc.Intelligent automated assistant for TV user interactions
US9886953B2 (en)2015-03-082018-02-06Apple Inc.Virtual assistant activation
US9721566B2 (en)2015-03-082017-08-01Apple Inc.Competing devices responding to voice triggers
GB2545151A (en)*2015-03-172017-06-14Yummi Group Singapore Pte LtdA method and system of creating a network to facilitate a multiplayer game
US10460227B2 (en)2015-05-152019-10-29Apple Inc.Virtual assistant in a communication session
US9936010B1 (en)2015-05-192018-04-03Orion LabsDevice to device grouping of personal communication nodes
US9940094B1 (en)*2015-05-192018-04-10Orion LabsDynamic muting audio transducer control for wearable personal communication nodes
US10200824B2 (en)2015-05-272019-02-05Apple Inc.Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
WO2016195545A1 (en)*2015-05-292016-12-08Telefonaktiebolaget Lm Ericsson (Publ)Authenticating data recording devices
US9801219B2 (en)*2015-06-152017-10-24Microsoft Technology Licensing, LlcPairing of nearby devices using a synchronized cue signal
US20160378747A1 (en)2015-06-292016-12-29Apple Inc.Virtual assistant for media playback
US9836129B2 (en)2015-08-062017-12-05Navico Holding AsUsing motion sensing for controlling a display
US10747498B2 (en)2015-09-082020-08-18Apple Inc.Zero latency digital assistant
US10671428B2 (en)2015-09-082020-06-02Apple Inc.Distributed personal assistant
US10331312B2 (en)2015-09-082019-06-25Apple Inc.Intelligent automated assistant in a media environment
US10740384B2 (en)2015-09-082020-08-11Apple Inc.Intelligent automated assistant for media search and playback
KR102356969B1 (en)*2015-09-242022-01-28삼성전자주식회사Method for performing communication and electronic devce supporting the same
KR102393286B1 (en)*2015-09-252022-05-02삼성전자주식회사Electronic apparatus and connecting method
US11587559B2 (en)2015-09-302023-02-21Apple Inc.Intelligent device identification
US9653075B1 (en)*2015-11-062017-05-16Google Inc.Voice commands across devices
US10691473B2 (en)2015-11-062020-06-23Apple Inc.Intelligent automated assistant in a messaging environment
US10956666B2 (en)2015-11-092021-03-23Apple Inc.Unconventional virtual assistant interactions
US10223066B2 (en)2015-12-232019-03-05Apple Inc.Proactive assistance based on dialog communication between devices
US9811314B2 (en)2016-02-222017-11-07Sonos, Inc.Metadata exchange involving a networked playback system and a networked microphone system
US10095470B2 (en)2016-02-222018-10-09Sonos, Inc.Audio response playback
US9965247B2 (en)2016-02-222018-05-08Sonos, Inc.Voice controlled media playback system based on user profile
US10264030B2 (en)2016-02-222019-04-16Sonos, Inc.Networked microphone device control
US9947316B2 (en)2016-02-222018-04-17Sonos, Inc.Voice control of a media playback system
US9772817B2 (en)2016-02-222017-09-26Sonos, Inc.Room-corrected voice detection
US10348907B1 (en)*2016-05-122019-07-09Antony P. NgCollaborative data processing
US10228906B2 (en)*2016-05-302019-03-12Samsung Electronics Co., Ltd.Electronic apparatus and controlling method thereof
US11227589B2 (en)2016-06-062022-01-18Apple Inc.Intelligent list reading
US12223282B2 (en)2016-06-092025-02-11Apple Inc.Intelligent automated assistant in a home environment
US9978390B2 (en)2016-06-092018-05-22Sonos, Inc.Dynamic player selection for audio signal processing
US10586535B2 (en)2016-06-102020-03-10Apple Inc.Intelligent digital assistant in a multi-tasking environment
US12197817B2 (en)2016-06-112025-01-14Apple Inc.Intelligent device arbitration and control
DK179415B1 (en)2016-06-112018-06-14Apple IncIntelligent device arbitration and control
DK201670540A1 (en)2016-06-112018-01-08Apple IncApplication integration with a digital assistant
US10271093B1 (en)*2016-06-272019-04-23Amazon Technologies, Inc.Systems and methods for routing content to an associated output device
CN116631391A (en)*2016-06-272023-08-22亚马逊技术公司System and method for routing content to associated output devices
US10491598B2 (en)2016-06-302019-11-26Amazon Technologies, Inc.Multi-factor authentication to access services
US10134399B2 (en)2016-07-152018-11-20Sonos, Inc.Contextualization of voice inputs
US10152969B2 (en)2016-07-152018-12-11Sonos, Inc.Voice detection by multiple devices
US10115400B2 (en)2016-08-052018-10-30Sonos, Inc.Multiple voice services
US10948577B2 (en)2016-08-252021-03-16Navico Holding AsSystems and associated methods for generating a fish activity report based on aggregated marine data
US9942678B1 (en)2016-09-272018-04-10Sonos, Inc.Audio playback settings for voice interaction
US9743204B1 (en)2016-09-302017-08-22Sonos, Inc.Multi-orientation playback device microphones
US10181323B2 (en)2016-10-192019-01-15Sonos, Inc.Arbitration-based voice recognition
CN106488281A (en)*2016-10-262017-03-08Tcl集团股份有限公司Television audio playing method and control system, television, and communication system
KR20180082043A (en)2017-01-092018-07-18삼성전자주식회사Electronic device and method for connecting communication using voice
US11204787B2 (en)2017-01-092021-12-21Apple Inc.Application integration with a digital assistant
CN107086037A (en)*2017-03-172017-08-22上海庆科信息技术有限公司Voice interaction method and apparatus for an embedded device, and embedded device
US11183181B2 (en)2017-03-272021-11-23Sonos, Inc.Systems and methods of multiple voice services
KR102275564B1 (en)*2017-04-142021-07-12삼성전자주식회사Electronic device and method for transmitting and receiving authentification information in electronic device
DK201770383A1 (en)2017-05-092018-12-14Apple Inc.User interface for correcting recognition errors
DK180048B1 (en)2017-05-112020-02-04Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
US10726832B2 (en)2017-05-112020-07-28Apple Inc.Maintaining privacy of personal information
DK179496B1 (en)2017-05-122019-01-15Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en)2017-05-122019-05-01Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770427A1 (en)2017-05-122018-12-20Apple Inc.Low-latency intelligent automated assistant
DK201770411A1 (en)2017-05-152018-12-20Apple Inc. MULTI-MODAL INTERFACES
US10303715B2 (en)2017-05-162019-05-28Apple Inc.Intelligent automated assistant for media exploration
US20180336892A1 (en)2017-05-162018-11-22Apple Inc.Detecting a trigger of a digital assistant
DK179560B1 (en)2017-05-162019-02-18Apple Inc.Far-field extension for digital assistant services
US10475449B2 (en)2017-08-072019-11-12Sonos, Inc.Wake-word detection suppression
WO2019038573A1 (en)*2017-08-252019-02-28Leong David Tuk WaiSound recognition apparatus
US10048930B1 (en)2017-09-082018-08-14Sonos, Inc.Dynamic computation of system response volume
US10531157B1 (en)*2017-09-212020-01-07Amazon Technologies, Inc.Presentation and management of audio and visual content across devices
US10446165B2 (en)2017-09-272019-10-15Sonos, Inc.Robust short-time fourier transform acoustic echo cancellation during audio playback
US10482868B2 (en)2017-09-282019-11-19Sonos, Inc.Multi-channel acoustic echo cancellation
US10621981B2 (en)2017-09-282020-04-14Sonos, Inc.Tone interference cancellation
US10051366B1 (en)2017-09-282018-08-14Sonos, Inc.Three-dimensional beam forming with a microphone array
US10466962B2 (en)*2017-09-292019-11-05Sonos, Inc.Media playback system with voice assistance
KR102443079B1 (en)2017-12-062022-09-14삼성전자주식회사 Electronic device and its control method
US10880650B2 (en)2017-12-102020-12-29Sonos, Inc.Network microphone devices with automatic do not disturb actuation capabilities
US10818290B2 (en)2017-12-112020-10-27Sonos, Inc.Home graph
US11343614B2 (en)2018-01-312022-05-24Sonos, Inc.Device designation of playback and network microphone device arrangements
US10152970B1 (en)*2018-02-082018-12-11Capital One Services, LlcAdversarial learning and generation of dialogue responses
JP7130761B2 (en)*2018-03-072022-09-05グーグル エルエルシー System and method for voice-based activation of custom device actions
US11087752B2 (en)*2018-03-072021-08-10Google LlcSystems and methods for voice-based initiation of custom device actions
US10600408B1 (en)*2018-03-232020-03-24Amazon Technologies, Inc.Content output management based on speech quality
US10818288B2 (en)2018-03-262020-10-27Apple Inc.Natural assistant interaction
US11145294B2 (en)2018-05-072021-10-12Apple Inc.Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en)2018-05-072021-02-23Apple Inc.Raise to speak
US11175880B2 (en)2018-05-102021-11-16Sonos, Inc.Systems and methods for voice-assisted media content selection
US10847178B2 (en)2018-05-182020-11-24Sonos, Inc.Linear filtering for noise-suppressed speech detection
US10959029B2 (en)2018-05-252021-03-23Sonos, Inc.Determining and adapting to changes in microphone performance of playback devices
DK179822B1 (en)2018-06-012019-07-12Apple Inc.Voice interaction at a primary device to access call functionality of a companion device
US10892996B2 (en)2018-06-012021-01-12Apple Inc.Variable latency device coordination
DK201870355A1 (en)2018-06-012019-12-16Apple Inc.Virtual assistant operation in multi-device environments
DK180639B1 (en)2018-06-012021-11-04Apple IncATTENTION-AWARE VIRTUAL ASSISTANT DISMISSAL
US10681460B2 (en)2018-06-282020-06-09Sonos, Inc.Systems and methods for associating playback devices with voice assistant services
JP7313807B2 (en)*2018-08-242023-07-25キヤノン株式会社 Communication device, its control method, and its program
US11076035B2 (en)2018-08-282021-07-27Sonos, Inc.Do not disturb feature for audio notifications
US10461710B1 (en)2018-08-282019-10-29Sonos, Inc.Media playback system with maximum volume setting
US10587430B1 (en)2018-09-142020-03-10Sonos, Inc.Networked devices, systems, and methods for associating playback devices based on sound codes
US10878811B2 (en)2018-09-142020-12-29Sonos, Inc.Networked devices, systems, and methods for intelligently deactivating wake-word engines
WO2020060311A1 (en)*2018-09-202020-03-26Samsung Electronics Co., Ltd.Electronic device and method for providing or obtaining data for training thereof
US11024331B2 (en)2018-09-212021-06-01Sonos, Inc.Voice detection optimization using sound metadata
US10811015B2 (en)2018-09-252020-10-20Sonos, Inc.Voice detection optimization based on selected voice assistant service
US11010561B2 (en)2018-09-272021-05-18Apple Inc.Sentiment prediction from textual data
US11100923B2 (en)2018-09-282021-08-24Sonos, Inc.Systems and methods for selective wake word detection using neural network models
US10978051B2 (en)*2018-09-282021-04-13Capital One Services, LlcAdversarial learning framework for persona-based dialogue modeling
US11462215B2 (en)2018-09-282022-10-04Apple Inc.Multi-modal inputs for voice commands
US10692518B2 (en)2018-09-292020-06-23Sonos, Inc.Linear filtering for noise-suppressed speech detection via multiple network microphone devices
US11899519B2 (en)2018-10-232024-02-13Sonos, Inc.Multiple stage network microphone device with reduced power consumption and processing load
US11475898B2 (en)2018-10-262022-10-18Apple Inc.Low-latency multi-speaker speech recognition
US20190074013A1 (en)*2018-11-022019-03-07Intel CorporationMethod, device and system to facilitate communication between voice assistants
EP3654249A1 (en)2018-11-152020-05-20SnipsDilated convolutions and gating for efficient keyword spotting
KR102599948B1 (en)*2018-11-162023-11-09삼성전자주식회사ELECTRONIC APPARATUS AND WiFi CONNECTING METHOD THEREOF
WO2020104042A1 (en)*2018-11-232020-05-28Unify Patente Gmbh & Co. KgComputer-implemented method for associating at least two communication terminals with one another and communication terminal
US11183183B2 (en)2018-12-072021-11-23Sonos, Inc.Systems and methods of operating media playback systems having multiple voice assistant services
US11132989B2 (en)2018-12-132021-09-28Sonos, Inc.Networked microphone devices, systems, and methods of localized arbitration
US10602268B1 (en)2018-12-202020-03-24Sonos, Inc.Optimization of network microphone devices using noise classification
US11638059B2 (en)2019-01-042023-04-25Apple Inc.Content playback on multiple devices
US11622267B2 (en)*2019-01-172023-04-04Visa International Service AssociationConducting secure transactions by detecting credential message with audio between first appliance and second appliance
US10867604B2 (en)2019-02-082020-12-15Sonos, Inc.Devices, systems, and methods for distributed voice processing
US11315556B2 (en)2019-02-082022-04-26Sonos, Inc.Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification
EP3709194A1 (en)2019-03-152020-09-16Spotify ABEnsemble-based data comparison
US11348573B2 (en)2019-03-182022-05-31Apple Inc.Multimodality in digital assistant systems
US11120794B2 (en)2019-05-032021-09-14Sonos, Inc.Voice assistant persistence across multiple network microphone devices
US11423908B2 (en)2019-05-062022-08-23Apple Inc.Interpreting spoken requests
DK201970509A1 (en)2019-05-062021-01-15Apple IncSpoken notifications
US11475884B2 (en)2019-05-062022-10-18Apple Inc.Reducing digital assistant latency when a language is incorrectly determined
US11307752B2 (en)2019-05-062022-04-19Apple Inc.User configurable task triggers
US11140099B2 (en)2019-05-212021-10-05Apple Inc.Providing message response suggestions
US11289073B2 (en)2019-05-312022-03-29Apple Inc.Device text to speech
US11496600B2 (en)2019-05-312022-11-08Apple Inc.Remote execution of machine-learned models
DK180129B1 (en)2019-05-312020-06-02Apple Inc. USER ACTIVITY SHORTCUT SUGGESTIONS
DK201970510A1 (en)2019-05-312021-02-11Apple IncVoice identification in digital assistant systems
US11468890B2 (en)2019-06-012022-10-11Apple Inc.Methods and user interfaces for voice-based control of electronic devices
US11360641B2 (en)2019-06-012022-06-14Apple Inc.Increasing the relevance of new available information
US10586540B1 (en)2019-06-122020-03-10Sonos, Inc.Network microphone device with command keyword conditioning
US11200894B2 (en)2019-06-122021-12-14Sonos, Inc.Network microphone device with command keyword eventing
US11361756B2 (en)2019-06-122022-06-14Sonos, Inc.Conditional wake word eventing based on environment
US10871943B1 (en)2019-07-312020-12-22Sonos, Inc.Noise classification for event detection
US11138975B2 (en)2019-07-312021-10-05Sonos, Inc.Locally distributed keyword detection
US11138969B2 (en)2019-07-312021-10-05Sonos, Inc.Locally distributed keyword detection
US11094319B2 (en)2019-08-302021-08-17Spotify AbSystems and methods for generating a cleaned version of ambient sound
US11488406B2 (en)2019-09-252022-11-01Apple Inc.Text detection using global geometry estimators
US10965335B1 (en)*2019-09-272021-03-30Apple Inc.Wireless device performance optimization using dynamic power control
US11189286B2 (en)2019-10-222021-11-30Sonos, Inc.VAS toggle based on device orientation
US11200900B2 (en)2019-12-202021-12-14Sonos, Inc.Offline voice control
US11562740B2 (en)2020-01-072023-01-24Sonos, Inc.Voice verification for media playback
US11556307B2 (en)2020-01-312023-01-17Sonos, Inc.Local voice data processing
US11308958B2 (en)2020-02-072022-04-19Sonos, Inc.Localized wakeword verification
US11308959B2 (en)2020-02-112022-04-19Spotify AbDynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices
US11328722B2 (en)*2020-02-112022-05-10Spotify AbSystems and methods for generating a singular voice audio stream
CN111540350B (en)*2020-03-312024-03-01北京小米移动软件有限公司Control method, device and storage medium of intelligent voice control equipment
US12301635B2 (en)2020-05-112025-05-13Apple Inc.Digital assistant hardware abstraction
US11038934B1 (en)2020-05-112021-06-15Apple Inc.Digital assistant hardware abstraction
US11061543B1 (en)2020-05-112021-07-13Apple Inc.Providing relevant data items based on context
US11755276B2 (en)2020-05-122023-09-12Apple Inc.Reducing description length based on confidence
CN111787051B (en)*2020-05-152023-06-27厦门快商通科技股份有限公司File transmission method and system based on voice recognition and mobile terminal
US11308962B2 (en)*2020-05-202022-04-19Sonos, Inc.Input detection windowing
US11727919B2 (en)2020-05-202023-08-15Sonos, Inc.Memory allocation for keyword spotting engines
US11482224B2 (en)2020-05-202022-10-25Sonos, Inc.Command keywords with input detection windowing
US12387716B2 (en)2020-06-082025-08-12Sonos, Inc.Wakewordless voice quickstarts
US11490204B2 (en)2020-07-202022-11-01Apple Inc.Multi-device audio adjustment coordination
US11438683B2 (en)2020-07-212022-09-06Apple Inc.User identification using headphones
CN111933168B (en)*2020-08-172023-10-27齐鲁工业大学Soft loop dynamic echo elimination method based on binder and mobile terminal
US11698771B2 (en)2020-08-252023-07-11Sonos, Inc.Vocal guidance engines for playback devices
US12283269B2 (en)2020-10-162025-04-22Sonos, Inc.Intent inference in audiovisual communication sessions
KR102835502B1 (en)*2020-10-232025-07-17삼성전자주식회사Electronic device and method for recording call thereof
US11984123B2 (en)2020-11-122024-05-14Sonos, Inc.Network device interaction by range
US12007512B2 (en)2020-11-302024-06-11Navico, Inc.Sonar display features
US11551700B2 (en)2021-01-252023-01-10Sonos, Inc.Systems and methods for power-efficient keyword detection
US12114377B2 (en)*2021-03-052024-10-08Samsung Electronics Co., Ltd.Electronic device and method for connecting device thereof
CN115243315B (en)*2021-04-222025-09-05中国移动通信有限公司研究院 Device control method, device and storage medium
US11889569B2 (en)2021-08-092024-01-30International Business Machines CorporationDevice pairing using wireless communication based on voice command context
US12021806B1 (en)2021-09-212024-06-25Apple Inc.Intelligent message delivery
WO2023056026A1 (en)2021-09-302023-04-06Sonos, Inc.Enabling and disabling microphones and voice assistants
EP4564154A3 (en)2021-09-302025-07-23Sonos Inc.Conflict management for wake-word detection processes
CN118648058A (en)*2022-02-022024-09-13谷歌有限责任公司 Speech recognition using word or phoneme time-stamping based on user input
US12327549B2 (en)2022-02-092025-06-10Sonos, Inc.Gatekeeping for voice intent processing
KR102670725B1 (en)*2023-09-272024-05-30주식회사 씨와이디정보기술A speech-to-text conversion device connected to multiple counterpart devices and method therefor
CN117609965B (en)*2024-01-192024-06-25深圳前海深蕾半导体有限公司Upgrade data packet acquisition method of intelligent device, intelligent device and storage medium

Citations (54)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20010039619A1 (en)*2000-02-032001-11-08Martine LapereSpeaker verification interface for secure transactions
US6339706B1 (en)1999-11-122002-01-15Telefonaktiebolaget L M Ericsson (Publ)Wireless voice-activated remote control device
US20020077095A1 (en)2000-10-132002-06-20International Business Machines CorporationSpeech enabled wireless device management and an access platform and related control methods thereof
US20030208356A1 (en)2002-05-022003-11-06International Business Machines CorporationComputer network including a computer system transmitting screen image information and corresponding speech information to another computer system
US20040022374A1 (en)*2000-08-092004-02-05Delphine CharletMethod of identifying a caller with a telephone service operator
US20040209569A1 (en)2003-04-162004-10-21Tomi HeinonenShort-range radio terminal adapted for data streaming and real time services
US20040243562A1 (en)2001-06-282004-12-02Michael JosenhansMethod for searching data in at least two databases
US20040248513A1 (en)2003-06-032004-12-09Glass Andrew C.Capacitive bonding of devices
US20050010417A1 (en)2003-07-112005-01-13Holmes David W.Simplified wireless device pairing
US20050090232A1 (en)*2002-06-202005-04-28Hsu Raymond T.Authentication in a communication system
US20050182631A1 (en)2004-02-132005-08-18In-Seok LeeVoice message recording and playing method using voice recognition
US6957185B1 (en)*1999-02-252005-10-18Enco-Tone, Ltd.Method and apparatus for the secure identification of the owner of a portable device
KR20060079660A (en)2005-01-032006-07-06주식회사 케이티프리텔 Method and apparatus for providing counterpart information display service using voice recording
US20060185007A1 (en)2005-02-142006-08-17International Business Machines CorporationSecure authentication of service users of a remote service interface to a storage media
US20060220784A1 (en)1994-09-222006-10-05Intuitive Surgical, Inc., A Delaware CorporationGeneral purpose distributed operating room control system
US20060282649A1 (en)2005-06-102006-12-14Malamud Mark ADevice pairing via voice commands
US20070100634A1 (en)*2001-02-162007-05-03International Business Machines CorporationTracking Time Using Portable Recorders and Speech Recognition
US20070168674A1 (en)*2003-12-092007-07-19Masao NonakaAuthentication system, authentication apparatus, and recording medium
US7260529B1 (en)2002-06-252007-08-21Lengen Nicholas DCommand insertion system and method for voice recognition applications
CN101030994A (en)2007-04-112007-09-05华为技术有限公司Speech discriminating method, system and server
US20080016537A1 (en)2006-07-172008-01-17Research In Motion LimitedManagement of multiple connections to a security token access device
US20080162141A1 (en)2006-12-282008-07-03Lortz Victor BVoice interface to NFC applications
KR20090044093A (en)2007-10-312009-05-07에스케이 텔레콤주식회사 Device collaboration method and system
US20090176505A1 (en)2007-12-212009-07-09Koninklijke Kpn N.V.Identification of proximate mobile devices
US20090248838A1 (en)*2008-04-012009-10-01Disney Enterprises, Inc.Method and system for pairing a medium to a user account
CN101599270A (en)2008-06-022009-12-09海尔集团公司Voice server and voice control method
US20090310762A1 (en)2008-06-142009-12-17George Alfred VeliusSystem and method for instant voice-activated communications using advanced telephones and data networks
US20100110837A1 (en)*2008-10-312010-05-06Samsung Electronics Co., Ltd.Method and apparatus for wireless communication using an acoustic signal
US20100210287A1 (en)2007-07-202010-08-19Koninklijke Kpn N.V.Identification of proximate mobile devices
US7826945B2 (en)2005-07-012010-11-02You ZhangAutomobile speech-recognition interface
US20100279612A1 (en)2003-12-222010-11-04Lear CorporationMethod of Pairing a Portable Device with a Communications Module of a Vehicular, Hands-Free Telephone System
US20100286983A1 (en)2009-05-072010-11-11Chung Bum ChoOperation control apparatus and method in multi-voice recognition system
US20100330909A1 (en)2009-06-252010-12-30Blueant Wireless Pty LimitedVoice-enabled walk-through pairing of telecommunications devices
US20100332236A1 (en)2009-06-252010-12-30Blueant Wireless Pty LimitedVoice-triggered operation of electronic devices
US20110003585A1 (en)2009-07-062011-01-06T-Mobile Usa, Inc.Communication mode swapping for telecommunications devices
US20110044438A1 (en)2009-08-202011-02-24T-Mobile Usa, Inc.Shareable Applications On Telecommunications Devices
US20110074693A1 (en)2009-09-252011-03-31Paul RanfordMethod of processing touch commands and voice commands in parallel in an electronic device supporting speech recognition
US20110111741A1 (en)2009-11-062011-05-12Kirstin ConnorsAudio-Only User Interface Mobile Phone Pairing
US20110217950A1 (en)*2010-03-052011-09-08Alan KozlayApparatus & method to improve pairing security in Bluetooth™ headsets & earbuds
US20110314153A1 (en)*2010-06-222011-12-22Microsoft CorporationNetworked device authentication, pairing and resource sharing
US8099289B2 (en)2008-02-132012-01-17Sensory, Inc.Voice interface and search for electronic devices including bluetooth headsets and remote systems
US20120084834A1 (en)*2010-10-012012-04-05At&T Intellectual Property I, L.P.System for communicating with a mobile device server
US8219028B1 (en)2008-03-312012-07-10Google Inc.Passing information between mobile devices
US20120184372A1 (en)2009-07-232012-07-19Nederlandse Organisatie Voor Toegepastnatuurweten- Schappelijk Onderzoek TnoEvent disambiguation
US20120233644A1 (en)2007-06-052012-09-13Bindu Rama RaoMobile device capable of substantially synchronized sharing of streaming media with other devices
US20120260268A1 (en)2011-04-112012-10-11Telenav, Inc.Navigation system with conditional based application sharing mechanism and method of operation thereof
US20130067288A1 (en)2011-09-092013-03-14Microsoft CorporationCooperative Client and Server Logging
US8463182B2 (en)2009-12-242013-06-11Sony Computer Entertainment Inc.Wireless device pairing and grouping methods
US8538333B2 (en)2011-12-162013-09-17Arbitron Inc.Media exposure linking utilizing bluetooth signal characteristics
US20130337739A1 (en)2011-03-012013-12-19Koninklijke Philips N.V.Method for enabling a wireless secured communication among devices
US8750473B2 (en)*2008-09-032014-06-10Smule, Inc.System and method for communication between mobile devices using digital/acoustic techniques
US8902785B2 (en)*2011-12-212014-12-02Ntt Docomo, Inc.Method, apparatus and system for finding and selecting partners
US9082413B2 (en)2012-11-022015-07-14International Business Machines CorporationElectronic transaction authentication based on sound proximity
US9805733B2 (en)*2012-07-032017-10-31Samsung Electronics Co., LtdMethod and apparatus for connecting service between user devices using voice

Patent Citations (61)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20060220784A1 (en)1994-09-222006-10-05Intuitive Surgical, Inc., A Delaware CorporationGeneral purpose distributed operating room control system
US6957185B1 (en)*1999-02-252005-10-18Enco-Tone, Ltd.Method and apparatus for the secure identification of the owner of a portable device
US7565297B2 (en)1999-02-252009-07-21Cidway Technologies LtdMethod and apparatus for the secure identification of the owner of a portable device
US6339706B1 (en)1999-11-122002-01-15Telefonaktiebolaget L M Ericsson (Publ)Wireless voice-activated remote control device
US20010039619A1 (en)*2000-02-032001-11-08Martine LapereSpeaker verification interface for secure transactions
US20040022374A1 (en)*2000-08-092004-02-05Delphine CharletMethod of identifying a caller with a telephone service operator
US20020077095A1 (en)2000-10-132002-06-20International Business Machines CorporationSpeech enabled wireless device management and an access platform and related control methods thereof
US20070100634A1 (en)*2001-02-162007-05-03International Business Machines CorporationTracking Time Using Portable Recorders and Speech Recognition
US20040243562A1 (en)2001-06-282004-12-02Michael JosenhansMethod for searching data in at least two databases
US20030208356A1 (en)2002-05-022003-11-06International Business Machines CorporationComputer network including a computer system transmitting screen image information and corresponding speech information to another computer system
US20050090232A1 (en)*2002-06-202005-04-28Hsu Raymond T.Authentication in a communication system
US7260529B1 (en)2002-06-252007-08-21Lengen Nicholas DCommand insertion system and method for voice recognition applications
US20040209569A1 (en)2003-04-162004-10-21Tomi HeinonenShort-range radio terminal adapted for data streaming and real time services
US20040248513A1 (en)2003-06-032004-12-09Glass Andrew C.Capacitive bonding of devices
US20050010417A1 (en)2003-07-112005-01-13Holmes David W.Simplified wireless device pairing
US20070168674A1 (en)*2003-12-092007-07-19Masao NonakaAuthentication system, authentication apparatus, and recording medium
US20100279612A1 (en)2003-12-222010-11-04Lear CorporationMethod of Pairing a Portable Device with a Communications Module of a Vehicular, Hands-Free Telephone System
US20050182631A1 (en)2004-02-132005-08-18In-Seok LeeVoice message recording and playing method using voice recognition
KR20060079660A (en)2005-01-032006-07-06주식회사 케이티프리텔 Method and apparatus for providing counterpart information display service using voice recording
US20060185007A1 (en)2005-02-142006-08-17International Business Machines CorporationSecure authentication of service users of a remote service interface to a storage media
US8699944B2 (en)*2005-06-102014-04-15The Invention Science Fund I, LlcDevice pairing using device generated sound
US20060282649A1 (en)2005-06-102006-12-14Malamud Mark ADevice pairing via voice commands
US7826945B2 (en)2005-07-012010-11-02You ZhangAutomobile speech-recognition interface
US20080016537A1 (en)2006-07-172008-01-17Research In Motion LimitedManagement of multiple connections to a security token access device
US20080162141A1 (en)2006-12-282008-07-03Lortz Victor BVoice interface to NFC applications
CN101241537A (en)2006-12-282008-08-13英特尔公司Voice interface for NFC applications
CN101030994A (en)2007-04-112007-09-05华为技术有限公司Speech discriminating method, system and server
US20080255848A1 (en)2007-04-112008-10-16Huawei Technologies Co., Ltd.Speech Recognition Method and System and Speech Recognition Server
US20120233644A1 (en)2007-06-052012-09-13Bindu Rama RaoMobile device capable of substantially synchronized sharing of streaming media with other devices
US20100210287A1 (en)2007-07-202010-08-19Koninklijke Kpn N.V.Identification of proximate mobile devices
KR20090044093A (en)2007-10-312009-05-07에스케이 텔레콤주식회사 Device collaboration method and system
US20090176505A1 (en)2007-12-212009-07-09Koninklijke Kpn N.V.Identification of proximate mobile devices
US8099289B2 (en)2008-02-132012-01-17Sensory, Inc.Voice interface and search for electronic devices including bluetooth headsets and remote systems
US8195467B2 (en)2008-02-132012-06-05Sensory, IncorporatedVoice interface and search for electronic devices including bluetooth headsets and remote systems
US8219028B1 (en)2008-03-312012-07-10Google Inc.Passing information between mobile devices
US20090248838A1 (en)*2008-04-012009-10-01Disney Enterprises, Inc.Method and system for pairing a medium to a user account
CN101599270A (en)2008-06-022009-12-09海尔集团公司Voice server and voice control method
US20090310762A1 (en)2008-06-142009-12-17George Alfred VeliusSystem and method for instant voice-activated communications using advanced telephones and data networks
US8750473B2 (en)*2008-09-032014-06-10Smule, Inc.System and method for communication between mobile devices using digital/acoustic techniques
US20100110837A1 (en)*2008-10-312010-05-06Samsung Electronics Co., Ltd.Method and apparatus for wireless communication using an acoustic signal
US20100286983A1 (en)2009-05-072010-11-11Chung Bum ChoOperation control apparatus and method in multi-voice recognition system
US20100332236A1 (en)2009-06-252010-12-30Blueant Wireless Pty LimitedVoice-triggered operation of electronic devices
US20100330909A1 (en)2009-06-252010-12-30Blueant Wireless Pty LimitedVoice-enabled walk-through pairing of telecommunications devices
CN102483915A (en)2009-06-252012-05-30蓝蚁无线股份有限公司Telecommunications device with voice-controlled functionality including walk-through pairing and voice-triggered operation
US8416767B2 (en)2009-07-062013-04-09T-Mobile Usa, Inc.Communication mode swapping for telecommunications devices
US20110003585A1 (en)2009-07-062011-01-06T-Mobile Usa, Inc.Communication mode swapping for telecommunications devices
US20120184372A1 (en)2009-07-232012-07-19Nederlandse Organisatie Voor Toegepastnatuurweten- Schappelijk Onderzoek TnoEvent disambiguation
US20110044438A1 (en)2009-08-202011-02-24T-Mobile Usa, Inc.Shareable Applications On Telecommunications Devices
US20110074693A1 (en)2009-09-252011-03-31Paul RanfordMethod of processing touch commands and voice commands in parallel in an electronic device supporting speech recognition
US20110111741A1 (en)2009-11-062011-05-12Kirstin ConnorsAudio-Only User Interface Mobile Phone Pairing
US8463182B2 (en)2009-12-242013-06-11Sony Computer Entertainment Inc.Wireless device pairing and grouping methods
US20110217950A1 (en)*2010-03-052011-09-08Alan KozlayApparatus & method to improve pairing security in Bluetooth™ headsets & earbuds
US20110314153A1 (en)*2010-06-222011-12-22Microsoft CorporationNetworked device authentication, pairing and resource sharing
US20120084834A1 (en)*2010-10-012012-04-05At&T Intellectual Property I, L.P.System for communicating with a mobile device server
US20130337739A1 (en)2011-03-012013-12-19Koninklijke Philips N.V.Method for enabling a wireless secured communication among devices
US20120260268A1 (en)2011-04-112012-10-11Telenav, Inc.Navigation system with conditional based application sharing mechanism and method of operation thereof
US20130067288A1 (en)2011-09-092013-03-14Microsoft CorporationCooperative Client and Server Logging
US8538333B2 (en)2011-12-162013-09-17Arbitron Inc.Media exposure linking utilizing bluetooth signal characteristics
US8902785B2 (en)*2011-12-212014-12-02Ntt Docomo, Inc.Method, apparatus and system for finding and selecting partners
US9805733B2 (en)*2012-07-032017-10-31Samsung Electronics Co., LtdMethod and apparatus for connecting service between user devices using voice
US9082413B2 (en)2012-11-022015-07-14International Business Machines CorporationElectronic transaction authentication based on sound proximity

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chinese Office Action dated Aug. 15, 2017 issued in counterpart application No. 201380045946.3, 20 pages.
European Search Report dated May 8, 2019 issued in counterpart application No. 19152424.8-1216, 8 pages.

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20180247655A1 (en)*2014-01-102018-08-30Verizon Patent And Licensing Inc.Personal assistant application
US10692505B2 (en)*2014-01-102020-06-23Cellco PartnershipPersonal assistant application
US11019427B2 (en)2014-09-012021-05-25Samsung Electronics Co., Ltd.Electronic device including a microphone array
US11871188B2 (en)2014-09-012024-01-09Samsung Electronics Co., Ltd.Electronic device including a microphone array
US20190287513A1 (en)*2018-03-152019-09-19Motorola Mobility LlcElectronic Device with Voice-Synthesis and Corresponding Methods
US10755695B2 (en)2018-03-152020-08-25Motorola Mobility LlcMethods in electronic devices with voice-synthesis and acoustic watermark capabilities
US10755694B2 (en)*2018-03-152020-08-25Motorola Mobility LlcElectronic device with voice-synthesis and acoustic watermark capabilities
US11594219B2 (en)2021-02-052023-02-28The Toronto-Dominion BankMethod and system for completing an operation

Also Published As

Publication number | Publication date
WO2014007545A1 (en)2014-01-09
KR20140005410A (en)2014-01-15
US9805733B2 (en)2017-10-31
EP3493513A1 (en)2019-06-05
CN104604274A (en)2015-05-06
US20180047406A1 (en)2018-02-15
US20140012587A1 (en)2014-01-09
CN104604274B (en)2018-11-20
EP2683147B1 (en)2019-02-27
KR101972955B1 (en)2019-04-26
EP2683147A1 (en)2014-01-08

Similar Documents

Publication | Publication date | Title
US10475464B2 (en)Method and apparatus for connecting service between user devices using voice
US11670302B2 (en)Voice processing method and electronic device supporting the same
US11410640B2 (en)Method and user device for providing context awareness service using speech recognition
US10964300B2 (en)Audio signal processing method and apparatus, and storage medium thereof
CN107210033B (en)Updating language understanding classifier models for digital personal assistants based on crowd sourcing
KR102369605B1 (en)Scaling digital personal assistant agents across devices
CN112970059B (en) Electronic device for processing user speech and control method thereof
WO2022052776A1 (en)Human-computer interaction method, and electronic device and system
KR102301880B1 (en)Electronic apparatus and method for spoken dialog thereof
US20160202957A1 (en)Reactive agent development environment
US20200051560A1 (en)System for processing user voice utterance and method for operating same
US20190004673A1 (en)Method for controlling display and electronic device supporting the same
KR20170103801A (en)Headless task completion within digital personal assistants
US20200258517A1 (en)Electronic device for providing graphic data based on voice and operating method thereof
US11170764B2 (en)Electronic device for processing user utterance
JP2014049140A (en)Method and apparatus for providing intelligent service using input characters in user device
TW201512987A (en)Method and device for controlling start-up of application and computer-readable storage medium
WO2022199596A1 (en)Intention decision-making method and device, and computer-readable storage medium
US20150089370A1 (en)Method and device for playing media data on a terminal
KR20150107066A (en)Messenger service system, method and apparatus for messenger service using common word in the system
HK1241551A1 (en)Updating language understanding classifier models for a digital personal assistant based on crowd-sourcing

Legal Events

Date | Code | Title | Description

FEPP | Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED

STPP | Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP | Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED

STPP | Information on status: patent application and granting procedure in general
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF | Information on status: patent grant
Free format text: PATENTED CASE

MAFP | Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 4

