US11222625B2 - Systems and methods for training devices to recognize sound patterns - Google Patents

Systems and methods for training devices to recognize sound patterns

Info

Publication number
US11222625B2
US11222625B2 (application US16/384,397; US201916384397A)
Authority
US
United States
Prior art keywords
audio
control panel
ambient audio
initial
pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/384,397
Other versions
US20200327885A1 (en)
Inventor
Pradyumna Sampath
Ramprasad Yelchuru
Purnaprajna R. Mangsuli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Resideo LLC
Original Assignee
Ademco Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ademco Inc
Priority to US16/384,397 (US11222625B2)
Assigned to ADEMCO INC. Assignment of assignors interest (see document for details). Assignors: SAMPATH, PRADYUMNA; YELCHURU, RAMPRASAD; MANGSULI, PURNAPRAJNA R.
Publication of US20200327885A1
Assigned to JPMORGAN CHASE BANK, N.A., as administrative agent. Security interest (see document for details). Assignors: ADEMCO INC.; RSI VIDEO TECHNOLOGIES, LLC
Application granted
Publication of US11222625B2
Assigned to RESIDEO LLC. Change of name (see document for details). Assignors: ADEMCO INC.
Legal status: Active
Adjusted expiration


Abstract

Systems and methods for training a control panel to recognize user defined and preprogrammed sound patterns are provided. Such systems and methods can include the control panel operating in a learning mode, receiving initial ambient audio from a region, and saving the initial ambient audio as an audio pattern in a memory device of the control panel. Such systems and methods can also include the control panel operating in an active mode, receiving subsequent ambient audio from the region, using an audio classification model to make an initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, determining whether the initial determination is correct, and when the control panel determines that the initial determination is incorrect, modifying or updating the audio classification model for improving the accuracy in detecting future consistency with the audio pattern.

Description

FIELD
The present invention relates generally to security systems and home automation systems. More particularly, the present invention relates to systems and methods for training devices in a security system or a home automation system to recognize preprogrammed or user customized sound patterns.
BACKGROUND
Known security systems and home automation systems can recognize ambient sound patterns that match preprogrammed sound patterns identified during manufacture or installation of devices in the security systems or the home automation systems. However, such known security systems and home automation systems cannot be trained to recognize user customized sound patterns that differ from the preprogrammed sound patterns. Additionally, audio classification models programmed during the manufacture or the installation of known security systems and home automation systems can be inaccurate because such known systems do not account for the variability of noise characteristics of deployed environments.
In view of the above, there is a need and an opportunity for improved systems and methods.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a system in accordance with disclosed embodiments;
FIG. 2 is a block diagram of a control panel in accordance with disclosed embodiments; and
FIG. 3 is a flow diagram of a method in accordance with disclosed embodiments.
DETAILED DESCRIPTION
While this invention is susceptible of an embodiment in many different forms, specific embodiments thereof will be described herein in detail with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention. It is not intended to limit the invention to the specific illustrated embodiments.
Embodiments disclosed herein can include systems and methods for training devices in a security system or a home automation system to recognize sound patterns. For example, such security systems or home automation systems can include a control panel or other similar device that can operate in a learning mode, during which the control panel can learn the sound patterns, and in an active mode, during which the control panel can identify the sound patterns and be trained for identifying the sound patterns at future times.
When the control panel is operating in the learning mode, the control panel can receive initial ambient audio from a region in which the devices in the security system or the home automation system are deployed. Then, the control panel can save the initial ambient audio as an audio pattern in a memory device of the control panel. In some embodiments, the initial ambient audio can include customized audio relevant to a user of the security system or the home automation system that is different from preprogrammed or preconfigured audio saved in the memory device during manufacture or installation of the control panel. For example, in some embodiments, the initial ambient audio can include live audio of the user's dog barking or the user's baby crying. Additionally or alternatively, in some embodiments, the initial ambient audio can include, but is not limited to, audio of a drill being used, a siren wailing, a phone ringing, a vacuum running, the user coughing, a door being operated, water running, knocking on a door, a microwave running, an electric shaver running, teeth brushing, a blender running, a dishwasher running, a doorbell ringing, a toilet flushing, a hairdryer running, the user laughing, the user snoring, the user typing, the user hammering, a car honking, a vehicle running, a saw being used, a cat meowing, an alarm clock activating, and kitchen items being used.
In some embodiments, the control panel can receive user input with the initial ambient audio that describes the initial ambient audio. In these embodiments, the control panel can use such a description of the initial ambient audio to identify user actions or non-actions that are consistent with a presence of the initial ambient audio in the region and/or user actions or non-actions that are inconsistent with the presence of the initial ambient audio in the region. Additionally or alternatively, in some embodiments, the control panel can receive user input with the initial ambient audio that directly identifies the user actions or non-actions that are consistent with the presence of the initial ambient audio in the region and/or the user actions or non-actions that are inconsistent with the presence of the initial ambient audio in the region.
In any embodiment, after operating in the learning mode, the control panel can transition to the active mode, and when operating in the active mode, the control panel can receive subsequent ambient audio from the region. Then, the control panel can use an audio classification model (e.g. audio processing rules) to make an initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, such as whether the subsequent ambient audio matches or is otherwise consistent with the user's dog barking or the user's baby crying.
In some embodiments, after initially determining whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the control panel can determine whether the initial determination is correct. For example, in some embodiments, the control panel can identify and evaluate implicit and/or explicit feedback to determine whether the initial determination is correct.
In some embodiments, when the control panel determines that the initial determination is correct, the control panel can initiate one or more actions, such as activating a camera for monitoring the region or activating an alarm in the region, or can transmit one or more signals to initiate the same. In some embodiments, a type of the one or more actions initiated by the control panel can be dependent on a severity and a nature of a type of event associated with the audio pattern.
However, in some embodiments, when the control panel determines that the initial determination is incorrect, the control panel can modify or update the audio classification model for accuracy in detecting future consistency with the audio pattern. For example, when the control panel initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, but subsequently determines that the initial determination is incorrect, that is, the initial determination was a false positive, the control panel can modify or update the audio classification model for the accuracy in detecting the future consistency with the audio pattern so that the audio classification model as modified or updated would identify the subsequent ambient audio as being inconsistent with the audio pattern. Similarly, in some embodiments, when the control panel initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but subsequently determines that the initial determination is incorrect, that is, the initial determination was a false negative, the control panel can modify or update the audio classification model for the accuracy in detecting the future consistency with the audio pattern so that the audio classification model as modified or updated would identify the subsequent ambient audio as being consistent with the audio pattern at future times.
In some embodiments, a format and a composition of the audio pattern can be dependent on a type of the audio classification model being used to make the initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern. For example, in some embodiments, the audio classification model can include one or more types of supervised machine learning models, such as a Gaussian Naïve Bayes classifier, a support vector machine classifier, or Fisher's linear discriminant analysis.
When the Gaussian Naïve Bayes classifier type model is used to make the initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the audio pattern can include a derived mean and variance of audio features of the initial ambient audio. For example, the audio features can include descriptive statistics of a window of audio data, a functional approximation of the window of audio data, or any characterization of the window of audio data, and in some embodiments, a size of the window of audio data can be dependent on a length of the initial ambient audio and can be, for example, 1 sec, 10 sec, or 1 min. During the active mode, a probability of the audio features of the subsequent ambient audio belonging within a normal distribution of the derived mean and variance of the audio features of the initial ambient audio can be calculated. When the probability is larger than a preset threshold, for example, 75%, the subsequent ambient audio can be identified as matching or being otherwise consistent with the audio pattern.
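The Gaussian Naïve Bayes variant described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the per-window feature vectors, the background-noise statistics, and the equal-prior posterior comparison against the 75% threshold are assumptions made for the example.

```python
import math

def learn_pattern(windows):
    """Learning mode: derive per-feature mean and variance from feature
    vectors extracted from windows of the initial ambient audio."""
    n, dims = len(windows), len(windows[0])
    means = [sum(w[d] for w in windows) / n for d in range(dims)]
    # Small floor keeps the variance nonzero for near-constant features.
    vars_ = [sum((w[d] - means[d]) ** 2 for w in windows) / n + 1e-9
             for d in range(dims)]
    return means, vars_

def log_likelihood(x, means, vars_):
    """Log-likelihood of a feature vector under independent per-feature Gaussians."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, vars_))

def matches(x, pattern, background, threshold=0.75):
    """Active mode: posterior probability that the window's features belong
    to the learned pattern rather than background, versus the threshold."""
    lp = log_likelihood(x, *pattern)
    lb = log_likelihood(x, *background)
    d = min(max(lb - lp, -700.0), 700.0)   # clamp to avoid overflow in exp
    posterior = 1.0 / (1.0 + math.exp(d))  # equal class priors assumed
    return posterior > threshold
```

A derived mean/variance pair per feature is exactly the "audio pattern" this model type would store in the memory device; the background statistics stand in for the other events the classifier must reject.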
Alternatively, when the support vector machine classifier type model is used to make the initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the audio pattern can include a hyperplane/kernel that can differentiate the audio features of the initial ambient audio into a unique event as compared with the audio features of background noise and other custom or preconfigured events. During the active mode, the audio features of the subsequent ambient audio can be evaluated with the hyperplane/kernel in an attempt to assign the audio features of the subsequent ambient audio to the unique event. When the audio features of the subsequent ambient audio can be assigned to the unique event, the subsequent ambient audio can be identified as matching or being otherwise consistent with the audio pattern.
Similarly, when the Fisher's linear discriminant analysis model is used to make the initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the audio pattern can include a linear hyperplane that can differentiate the audio features of the initial ambient audio into a unique event as compared with the audio features of background noise and other custom or preconfigured events. For example, the linear hyperplane can compress clusters of the audio features of the initial ambient audio that correspond to the unique event and locate those clusters far apart from the clusters of the audio features of the background noise and other custom or preconfigured events to enable the best possible discrimination among all events. During the active mode, the audio features of the subsequent ambient audio can be projected onto the linear hyperplane in an attempt to assign the audio features of the subsequent ambient audio to the unique event. When the audio features of the subsequent ambient audio can be assigned to the unique event, the subsequent ambient audio can be identified as matching or being otherwise consistent with the audio pattern.
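The Fisher discriminant variant can be sketched with the classic closed-form direction w = Sw^-1 (m1 - m0) and a midpoint threshold on the projection; the regularization term and the toy clusters are illustrative assumptions, not details from the patent.

```python
import numpy as np

def fit_lda(event_feats, other_feats):
    """Derive the Fisher discriminant direction that pulls the event cluster
    tight and pushes it away from background/other-event features."""
    m1, m0 = event_feats.mean(axis=0), other_feats.mean(axis=0)
    # Within-class scatter, regularized so it stays invertible for tiny samples.
    sw = (np.cov(event_feats.T) + np.cov(other_feats.T)
          + 1e-6 * np.eye(event_feats.shape[1]))
    w = np.linalg.solve(sw, m1 - m0)            # discriminant direction
    threshold = float(0.5 * (w @ m1 + w @ m0))  # midpoint of projected means
    return w, threshold

def assign_to_event(x, w, threshold):
    """Active mode: project the window's features onto the discriminant
    direction and assign to the unique event if past the threshold."""
    return bool(float(w @ x) > threshold)
```

The stored direction and threshold together describe the linear hyperplane that this model type would keep as the audio pattern.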
As disclosed and described herein, the control panel can use the explicit feedback to determine whether the initial determination is correct. For example, in some embodiments, when the control panel initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the control panel can transmit a notification message indicative of the initial determination to a notification device.
Then, responsive to the notification message, the control panel can receive the explicit feedback from the user directly or via the notification device, and the explicit feedback can explicitly confirm or deny that the subsequent ambient audio matches or is otherwise consistent with the audio pattern. For example, the explicit feedback can include user input confirming or denying that the user's dog is barking or the user's baby is crying. In some embodiments, when the explicit feedback denies that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, that is, identifies a false positive, the control panel can determine that the initial determination is incorrect and can modify or update the audio classification model as described herein.
Similarly, when the control panel initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but receives the explicit feedback denying that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, that is, identifies a false negative, the control panel can determine that the initial determination is incorrect and can modify or update the audio classification model as described herein. In this regard, the control panel can receive the explicit feedback even without transmitting the notification message to the notification device.
As further disclosed and described herein, the control panel can use the implicit feedback to determine whether the initial determination is correct. For example, the implicit feedback can include detecting an occurrence or non-occurrence of one or more of the user actions or non-actions that would be expected when the user's dog is barking or the user's baby is crying, such as opening a door to or movement towards a room in which the dog or the baby is located.
In some embodiments, when the control panel initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, but subsequently detects one or more of the user actions or non-actions that are inconsistent with the presence of the initial ambient audio in the region, that is, identifies a false positive, the control panel can determine that the initial determination is incorrect and can modify or update the audio classification model as described herein. Similarly, when the control panel initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but subsequently detects one or more of the user actions or non-actions that are consistent with the presence of the initial ambient audio in the region, that is, identifies a false negative, the control panel can determine that the initial determination is incorrect and can modify or update the audio classification model as described herein.
FIG. 1 is a block diagram of a system 20 in accordance with disclosed embodiments. As seen in FIG. 1, the system 20 can include a control panel 22, a notification device 24, and a plurality of connected devices 26, such as sensors, microphones, cameras, and the like, deployed in a region R. As further seen in FIG. 1, the notification device 24 and the plurality of connected devices 26 can communicate with the control panel 22 via known wired and/or wireless mediums.
FIG. 2 is a block diagram of the control panel 22 in accordance with disclosed embodiments. As seen in FIG. 2, the control panel 22 can include a programmable processor 28, a memory device 30, a microphone 32, and a communication interface 34.
FIG. 3 is a flow diagram of a method 100 in accordance with disclosed embodiments. As seen in FIG. 3, the method 100 can include the programmable processor 28 receiving initial ambient audio from the region R via the microphone 32 while in a learning mode, as in 102, and saving the initial ambient audio as an audio pattern in the memory device 30, as in 104. Then, the method 100 can include the programmable processor 28 transitioning to an active mode and receiving subsequent ambient audio from the region R via the microphone 32 while in the active mode, as in 106.
Responsive to receiving the subsequent ambient audio as in 106, the method 100 can include the programmable processor 28 using an audio classification model stored in the memory device 30 to make an initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern, as in 108, identifying implicit and/or explicit feedback, and determining whether the implicit and/or explicit feedback indicates whether the initial determination made as in 108 is correct, as in 110. In some embodiments, the programmable processor 28 can receive the explicit feedback from the communication interface 34, and the communication interface 34 can receive the explicit feedback directly from a user or from the notification device 24 receiving the explicit feedback directly from the user. Additionally or alternatively, in some embodiments, the programmable processor 28 can receive the implicit feedback from the communication interface 34, and the communication interface 34 can receive the implicit feedback from one or more of the plurality of connected devices 26. For example, the implicit feedback can include information about the region R or activity therein detected by the one or more of the plurality of connected devices 26.
When the programmable processor 28 determines that the initial determination made as in 108 is correct as in 110, the method 100 can include the programmable processor 28 taking no action, as in 112. However, when the programmable processor 28 determines that the initial determination made as in 108 is incorrect as in 110, the method 100 can include the programmable processor 28 modifying or updating the audio classification model for accuracy in detecting future consistency with the audio pattern, as in 114.
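One pass of the active-mode flow of the method 100 (steps 108 through 114) can be sketched as below. The `classify` and `update_model` callables and the feedback value are hypothetical stand-ins for the audio classification model and the implicit/explicit feedback sources.

```python
def active_mode_step(classify, update_model, x, feedback):
    """One active-mode pass: make the initial determination (as in 108),
    check feedback against it (as in 110), and either take no action
    (as in 112) or update the model on a false positive/negative (as in 114).
    `feedback` is True/False when feedback was observed, None otherwise."""
    initial = classify(x)                 # initial determination, as in 108
    if feedback is not None and feedback != initial:
        update_model(x, feedback)         # false positive or negative, as in 114
        return "model updated"
    return "no action"                    # as in 112
```

The corrected label fed to `update_model` is what lets the classifier identify the same ambient audio correctly at future times.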
Although a few embodiments have been described in detail above, other modifications are possible. For example, the logic flows described above do not require the particular order described or sequential order to achieve desirable results. Other steps may be provided, steps may be eliminated from the described flows, and other components may be added to or removed from the described systems. Other embodiments may be within the scope of the invention.
From the foregoing, it will be observed that numerous variations and modifications may be effected without departing from the spirit and scope of the invention. It is to be understood that no limitation with respect to the specific system or method described herein is intended or should be inferred. It is, of course, intended to cover all such modifications as fall within the spirit and scope of the invention.

Claims (14)

What is claimed is:
1. A method comprising:
a control panel operating in a learning mode receiving initial ambient audio from a region;
the control panel receiving user input that includes a description of the initial ambient audio;
the control panel saving the initial ambient audio as an audio pattern and the description of the initial ambient audio in a memory device of the control panel;
the control panel operating in an active mode receiving subsequent ambient audio from the region;
the control panel using an audio classification model to make an initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern;
the control panel using the description of the initial ambient audio to identify a user action or non-action in the region, the user action or non-action in the region comprising at least one of a first user action or non-action and a second user action or non-action;
the control panel identifying the first user action or non-action as consistent with a presence of the initial ambient audio in the region and the control panel identifying the second user action or non-action as inconsistent with the presence of the initial ambient audio in the region;
when the control panel initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern but detects the second user action or non-action, the control panel determining that the initial determination is incorrect;
when the control panel initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern but detects the first user action or non-action, the control panel determining that the initial determination is incorrect; and
when the control panel determines that the initial determination is incorrect, the control panel modifying or updating the audio classification model for accuracy in detecting future consistency with the audio pattern.
2. The method of claim 1 further comprising:
when the control panel initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the control panel transmitting a notification message to a notification device; and
responsive to the notification message, the control panel receiving user input confirming or denying the subsequent ambient audio matching or otherwise being consistent with the audio pattern.
3. The method of claim 2 further comprising:
when the user input denies that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the control panel determining that the initial determination is incorrect.
4. The method of claim 1 further comprising:
when the control panel initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but receives user input indicating that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the control panel determining that the initial determination is incorrect.
5. The method of claim 1 wherein the initial ambient audio is different from preprogrammed audio saved in the memory device during manufacture of the control panel.
6. The method of claim 1 further comprising:
when the control panel initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, but that the initial determination is incorrect, the control panel modifying or updating the audio classification model for the accuracy in detecting the future consistency with the audio pattern so that the audio classification model as modified or updated would identify the subsequent ambient audio as failing to match or otherwise being inconsistent with the audio pattern.
7. The method of claim 1 further comprising:
when the control panel initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but that the initial determination is incorrect, the control panel modifying or updating the audio classification model for the accuracy in detecting the future consistency with the audio pattern so that the audio classification model as modified or updated would identify the subsequent ambient audio as matching or otherwise being consistent with the audio pattern.
8. A system comprising:
a programmable processor;
a memory device coupled to the programmable processor; and
a microphone coupled to the programmable processor,
wherein, when operating in a learning mode, the programmable processor i) receives initial ambient audio from a region via the microphone, ii) receives user input that includes a description of the initial ambient audio, and iii) saves the initial ambient audio as an audio pattern and the description of the initial ambient audio in the memory device,
wherein, when operating in an active mode, the programmable processor receives subsequent ambient audio from the region via the microphone and uses an audio classification model stored in the memory device to make an initial determination as to whether the subsequent ambient audio matches or is otherwise consistent with the audio pattern,
wherein, when operating in the active mode, the programmable processor uses the description of the initial ambient audio to identify a user action or non-action in the region, the user action or non-action in the region comprising at least one of a first user action or non-action and a second user action or non-action,
wherein the programmable processor identifies the first user action or non-action as consistent with a presence of the initial ambient audio in the region and the programmable processor identifies the second user action or non-action as inconsistent with the presence of the initial ambient audio in the region,
wherein when the programmable processor initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern but detects the second user action or non-action, the programmable processor determines that the initial determination is incorrect;
when the programmable processor initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern but detects the first user action or non-action, the programmable processor determines that the initial determination is incorrect; and
wherein, when the programmable processor determines that the initial determination is incorrect, the programmable processor modifies or updates the audio classification model for accuracy in detecting future consistency with the audio pattern.
9. The system of claim 8 wherein, when the programmable processor initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the programmable processor transmits a notification message to a notification device, and wherein, responsive to the notification message, the programmable processor receives user input confirming or denying the subsequent ambient audio matching or otherwise being consistent with the audio pattern.
10. The system of claim 9 wherein, when the user input denies that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the programmable processor determines that the initial determination is incorrect.
11. The system of claim 8 wherein, when the programmable processor initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but receives user input indicating that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, the programmable processor determines that the initial determination is incorrect.
12. The system of claim 8 wherein the initial ambient audio is different from preprogrammed audio saved in the memory device during manufacture.
13. The system of claim 8 wherein, when the programmable processor initially determines that the subsequent ambient audio matches or is otherwise consistent with the audio pattern, but that the initial determination is incorrect, the programmable processor modifies or updates the audio classification model for the accuracy in detecting the future consistency with the audio pattern so that the audio classification model as modified or updated would identify the subsequent ambient audio as failing to match or otherwise being inconsistent with the audio pattern.
14. The system of claim 8 wherein, when the programmable processor initially determines that the subsequent ambient audio fails to match or is otherwise inconsistent with the audio pattern, but that the initial determination is incorrect, the programmable processor modifies or updates the audio classification model for the accuracy in detecting the future consistency with the audio pattern so that the audio classification model as modified or updated would identify the subsequent ambient audio as matching or otherwise being consistent with the audio pattern.
US16/384,397 · Priority date 2019-04-15 · Filing date 2019-04-15 · Systems and methods for training devices to recognize sound patterns · Active (adjusted expiration 2039-08-16) · US11222625B2 (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
US16/384,397 (US11222625B2, en) · 2019-04-15 · 2019-04-15 · Systems and methods for training devices to recognize sound patterns


Publications (2)

Publication Number · Publication Date
US20200327885A1 (en) · 2020-10-15
US11222625B2 (en) · 2022-01-11

Family ID: 72749214

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
US16/384,397 (US11222625B2, Active, adjusted expiration 2039-08-16) · Systems and methods for training devices to recognize sound patterns · 2019-04-15 · 2019-04-15

Country Status (1)

Country · Link
US · US11222625B2 (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US6009392A (en) · 1998-01-15 · 1999-12-28 · International Business Machines Corporation · Training speech recognition by matching audio segment frequency of occurrence with frequency of words and letter combinations in a corpus
US20050071159A1 (en)* · 2003-09-26 · 2005-03-31 · Robert Boman · Speech recognizer performance in car and home applications utilizing novel multiple microphone configurations
US20070078649A1 (en)* · 2003-02-21 · 2007-04-05 · Hetherington Phillip A · Signature noise removal
US20080159560A1 (en)* · 2006-12-30 · 2008-07-03 · Motorola, Inc. · Method and Noise Suppression Circuit Incorporating a Plurality of Noise Suppression Techniques
US20110289224A1 (en)* · 2009-01-30 · 2011-11-24 · Mitchell Trott · Methods and systems for establishing collaborative communications between devices using ambient audio
US20150112678A1 (en)* · 2008-12-15 · 2015-04-23 · Audio Analytic Ltd · Sound capturing and identifying devices
US20150271654A1 (en)* · 2008-03-14 · 2015-09-24 · William J. Johnson · System and method for targeting data processing system(s) with data
US20160036958A1 (en)* · 2014-04-10 · 2016-02-04 · Twin Harbor Labs, LLC · Methods and apparatus notifying a user of the operating condition of a remotely located household appliance
US20160158648A1 (en)* · 2014-12-05 · 2016-06-09 · Disney Enterprises, Inc. · Automated selective scoring of user-generated content
US9946511B2 (en) · 2012-11-28 · 2018-04-17 · Google Llc · Method for user training of information dialogue system
US20180231653A1 (en)* · 2017-02-14 · 2018-08-16 · Microsoft Technology Licensing, Llc · Entity-tracking computing system
US20180239377A1 (en)* · 2014-06-05 · 2018-08-23 · Wise Spaces Ltd. · Home automation control system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6009392A (en) | 1998-01-15 | 1999-12-28 | International Business Machines Corporation | Training speech recognition by matching audio segment frequency of occurrence with frequency of words and letter combinations in a corpus
US20070078649A1 (en)* | 2003-02-21 | 2007-04-05 | Hetherington Phillip A | Signature noise removal
US20050071159A1 (en)* | 2003-09-26 | 2005-03-31 | Robert Boman | Speech recognizer performance in car and home applications utilizing novel multiple microphone configurations
US20080159560A1 (en)* | 2006-12-30 | 2008-07-03 | Motorola, Inc. | Method and Noise Suppression Circuit Incorporating a Plurality of Noise Suppression Techniques
US20150271654A1 (en)* | 2008-03-14 | 2015-09-24 | William J. Johnson | System and method for targeting data processing system(s) with data
US20150112678A1 (en)* | 2008-12-15 | 2015-04-23 | Audio Analytic Ltd | Sound capturing and identifying devices
US20110289224A1 (en)* | 2009-01-30 | 2011-11-24 | Mitchell Trott | Methods and systems for establishing collaborative communications between devices using ambient audio
US9946511B2 (en) | 2012-11-28 | 2018-04-17 | Google Llc | Method for user training of information dialogue system
US20160036958A1 (en)* | 2014-04-10 | 2016-02-04 | Twin Harbor Labs, LLC | Methods and apparatus notifying a user of the operating condition of a remotely located household appliance
US20180239377A1 (en)* | 2014-06-05 | 2018-08-23 | Wise Spaces Ltd. | Home automation control system
US20160158648A1 (en)* | 2014-12-05 | 2016-06-09 | Disney Enterprises, Inc. | Automated selective scoring of user-generated content
US20180231653A1 (en)* | 2017-02-14 | 2018-08-16 | Microsoft Technology Licensing, Llc | Entity-tracking computing system
US20180233139A1 (en)* | 2017-02-14 | 2018-08-16 | Microsoft Technology Licensing, Llc | Intelligent digital assistant system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Benjamin Elizalde et al., "An Approach for Self-Training Audio Event Detectors Using Web Data," Jun. 27, 2017.
IBM Cloud Docs / Speech to Text, "Creating a custom acoustic model," https://console.bluemix.net/docs/services/speech-to-text/acoustic-create.html#acoustic, last updated Jan. 29, 2019.
Narayan, "How to teach Neural Networks to detect everyday sounds," https://www.skcript.com/svr/building-audio-classifier-nueral-network/, Mar. 16, 2018.
"Sound classification: Classify the source of sound in an audio file (custom sound classification model)," Feb. 4, 2019.

Also Published As

Publication number | Publication date
US20200327885A1 (en) | 2020-10-15

Similar Documents

Publication | Title
US10395494B2 (en) | Systems and methods of home-specific sound event detection
US11300984B2 (en) | Home automation control system
US10429177B2 (en) | Blocked sensor detection and notification
US10354517B1 (en) | Method of providing a human-perceptible indication of alarm monitoring system status
US11894015B2 (en) | Embedded audio sensor system and methods
CN108600202B (en) | An information processing method and device, and a computer-readable storage medium
US11585039B2 (en) | Method for preventing accident performed by home appliance and cloud server using artificial intelligence
KR20240169137A (en) | Determination of user presence and absence using wifi
US20220027725A1 (en) | Sound model localization within an environment
EP3846144A1 (en) | Apparatus and method for monitoring an access point
US12068001B2 (en) | Acoustic event detection
US11393306B2 (en) | Intruder detection method and apparatus
US11222625B2 (en) | Systems and methods for training devices to recognize sound patterns
EP4066244B1 (en) | Method for recognizing at least one naturally emitted sound produced by a real-life sound source in an environment comprising at least one artificial sound source, corresponding apparatus, computer program product and computer-readable carrier medium
TWI735121B (en) | Security system
CN107851369B (en) | Information processing apparatus, information processing method, and computer-readable storage medium
CN119206915A (en) | Smart door lock alarm method, device, computer equipment and storage medium
CN110634253B (en) | Doorbell abnormity warning method and related products
CN111183478B (en) | Household electrical appliance system
CN113079251A (en) | Alarm clock closing method applied to smart phone
CN210804546U (en) | Intelligent home system based on iris lock
US20210090591A1 (en) | Security system
KR20180082231A (en) | Method and Device for sensing user designated audio signals
CN118692197A (en) | Indoor intrusion alarm method, device, electronic equipment and storage medium
CN114821461A (en) | Information processing method and electronic equipment

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:ADEMCO INC., MINNESOTA

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAMPATH, PRADYUMNA;YELCHURU, RAMPRASAD;MANGSULI, PURNAPRAJNA R.;SIGNING DATES FROM 20190411 TO 20190415;REEL/FRAME:048887/0281

FEPP | Fee payment procedure

Free format text:ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general

Free format text:RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS | Assignment

Owner name:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT, NEW YORK

Free format text:SECURITY INTEREST;ASSIGNORS:ADEMCO INC.;RSI VIDEO TECHNOLOGIES, LLC;REEL/FRAME:055313/0715

Effective date:20210212

STPP | Information on status: patent application and granting procedure in general

Free format text:FINAL REJECTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text:RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP | Information on status: patent application and granting procedure in general

Free format text:ADVISORY ACTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text:DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general

Free format text:NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general

Free format text:PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF | Information on status: patent grant

Free format text:PATENTED CASE

AS | Assignment

Owner name:RESIDEO LLC, DELAWARE

Free format text:CHANGE OF NAME;ASSIGNOR:ADEMCO INC.;REEL/FRAME:071546/0001

Effective date:20241227

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:4

