US10003688B1 - Systems and methods for cluster-based voice verification - Google Patents

Systems and methods for cluster-based voice verification
Download PDF

Info

Publication number
US10003688B1
US10003688B1
Authority
US
United States
Prior art keywords
caller
demographic
words
processor
telephone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/891,712
Inventor
Austin Walters
Jeremy Goodsitt
Fardin Abdi Taghi Abad
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital One Services LLC
Original Assignee
Capital One Services LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US15/891,712, patent US10003688B1
Application filed by Capital One Services LLC
Assigned to CAPITAL ONE SERVICES, LLC. Assignment of assignors interest (see document for details). Assignors: GOODSITT, JEREMY; ABDI TAGHI ABAD, FARDIN; WALTERS, AUSTIN
Priority to US15/980,214, patent US10091352B1
Publication of US10003688B1
Application granted
Priority to US16/118,032, patent US10205823B1
Priority to EP19152182.2A, patent EP3525209B1
Priority to CA3031819A, patent CA3031819C
Priority to CA3115632A, patent CA3115632C
Priority to US16/263,404, patent US10412214B2
Priority to US16/529,203, patent US10574812B2
Status: Active
Anticipated expiration

Abstract

Systems for caller identification and authentication may include an authentication server. The authentication server may be configured to receive audio data including speech of a plurality of telephone calls, use audio data for at least a subset of the plurality of telephone calls to populate a plurality of word clusters each associated with a specific demographic, and/or use audio data for at least one of the plurality of telephone calls to identify the telephone caller making the telephone call based on determining a most similar word cluster of the plurality of word clusters to the audio data of the caller.

Description

BACKGROUND
Providers of secure user accounts, such as bank accounts, credit card accounts, and/or other secure accounts, may provide phone-based services to their users. For example, users wishing to set up new accounts may call a phone number to speak with an automated account system and/or a live representative. In another example, account holders may call a phone number to speak with an automated account system and/or a live representative in order to resolve issues with their account and/or access account features and/or functions. In another example, users may receive phone calls from the provider, for example when potential account fraud is detected and/or to offer account services. Because the user accounts may be related to sensitive information such as user identity information and/or access to user funds and/or credit, account providers may provide a variety of security measures to safeguard against fraud. In some situations, it may be useful to evaluate whether a caller is who they claim to be.
SUMMARY OF THE DISCLOSURE
Systems and methods described herein may help verify an identity of a user of phone-based account services. For example, a user's voice may be analyzed to determine whether it is characteristic of an expected user voice (e.g., the voice of the account holder). The analysis may involve determining whether the user's voice exhibits traits common to a known user demographic. Based at least in part on the analysis, the systems and methods described herein may evaluate a likelihood of fraud, for example determining whether a caller is likely the true account holder or not. Systems and methods described herein may also be trained with caller data from a plurality of callers to identify and/or sort traits common to one or more demographics.
Some embodiments of voice verification systems and methods may generate and use clusters of data for comparing with user voice data. A population may be divided into a set of demographics, for example based on geographic region, income level, and/or other sociological factors. Each demographic may have similar speech mannerisms. For example, a given demographic may include particular words in speech more frequently than other demographics, and/or a given demographic may pronounce words with specific sounds, emphases, timings, etc.
Disclosed embodiments may use known demographic data about callers to analyze callers' speech and characterize speech for the demographic(s) to which they belong. For example, a system performing speech analysis may have information about a caller's geographic location of residence and/or past residences and about the caller's income level and/or past income levels. This may be true because the caller may be an account holder who disclosed this information through account creation and/or maintenance, or the system may otherwise have access to this information. Accordingly, when an account holder's speech is analyzed, the data that results may be clustered together with data for other users known to have the same demographic information. Over time, the disclosed systems and methods may form clusters of data that accurately represent the specific speech mannerisms of specific demographics.
For example, a system configured to generate clusters may receive audio data including speech of a plurality of telephone calls. For at least a subset of the plurality of telephone calls, the system may determine demographic data for a telephone caller making the telephone call (e.g., based on an account associated with the caller). For at least the subset of the plurality of telephone calls, the system may analyze the audio data to identify a plurality of words from the speech of the telephone caller. In some embodiments, the system may also analyze the audio data to identify at least one acoustic characteristic of the speech of the telephone caller. In some embodiments, the system may correlate each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words. The system may then determine at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
In either case, the system may populate at least one word cluster with at least a subset of the plurality of words from the speech of each telephone caller associated with the specific demographic based on the demographic data for the telephone caller and/or populate at least one word cluster with at least a subset of the at least one acoustic characteristic of the speech of each telephone caller associated with the specific demographic based on the demographic data for the telephone caller. Each cluster may have a plurality of associated words from among at least the subset of the plurality of words and an occurrence frequency for each of the plurality of associated words that are characteristic to the cluster. Each cluster may also, or alternatively, have a plurality of associated acoustic characteristics that are characteristic to the cluster in some embodiments.
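A per-demographic word cluster of the kind described above can be sketched as a simple data structure holding word counts and occurrence frequencies. This is an illustrative sketch only; the class and method names (`WordCluster`, `populate`, `occurrence_frequency`) and the demographic label are hypothetical and not part of the patent:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class WordCluster:
    """One cluster of speech data for a specific demographic (hypothetical structure)."""
    demographic: str
    word_counts: Counter = field(default_factory=Counter)  # raw counts across all calls
    total_words: int = 0

    def populate(self, words):
        """Add words identified from one caller's audio to the cluster."""
        self.word_counts.update(words)
        self.total_words += len(words)

    def occurrence_frequency(self, word):
        """Relative frequency of a word within this cluster."""
        if self.total_words == 0:
            return 0.0
        return self.word_counts[word] / self.total_words

cluster = WordCluster(demographic="region-south")
cluster.populate(["y'all", "hello", "y'all", "account"])
cluster.occurrence_frequency("y'all")  # 0.5
```

Acoustic characteristics could be stored alongside the word counts in the same structure; they are omitted here to keep the sketch minimal.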
Once clusters are generated, they may be used to help verify a caller's identity. For example, account holders' voices may be analyzed to determine whether they are characteristic of any demographic indicated in their account data. In another example, prospective account holders' voices may be analyzed to identify demographic(s) to which they may be likely to belong. Based on the analysis, some embodiments disclosed herein may assess a threat level of a caller. For example, if a caller's demographic derived from voice analysis does not match any demographic associated with their account or prospective account, the analysis system may elevate a threat level for a caller, indicating that the caller may be attempting fraud (e.g., by impersonating the real account holder). This information may be added to other threat information collected by other systems and methods as part of a holistic threat score for the caller. In some embodiments, callers reaching a predetermined threat score threshold may be flagged for follow-up investigation and/or may have their account-related requests denied.
For example, a system configured to authenticate a telephone caller may receive audio data including speech of the telephone caller. The system may analyze the audio data to identify a plurality of words from the speech of the telephone caller and to identify an occurrence frequency for each of the plurality of words. In some embodiments, the system may analyze the audio data to identify at least one acoustic characteristic of the speech of the telephone caller. In some embodiments, the system may correlate each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words. The system may then determine at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
The system may compare the plurality of words, the occurrence frequencies, and/or the at least one acoustic characteristic of the speech to a plurality of word clusters. Each word cluster may comprise a plurality of associated words, an occurrence frequency for each of the plurality of associated words, and at least one associated acoustic characteristic. Each word cluster may be associated with one of a plurality of demographics.
The system may determine a most similar word cluster of the plurality of word clusters to the audio data based on a similarity of the plurality of words and the plurality of associated words of the most similar cluster, a similarity of the occurrence frequencies of the plurality of words and the occurrence frequencies of the plurality of associated words of the most similar cluster, and/or a similarity of the at least one acoustic characteristic of the speech of the telephone caller and the at least one associated acoustic characteristic of the most similar cluster.
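One way to realize the "most similar word cluster" determination is to compare the caller's word-frequency profile against each cluster's profile with a standard similarity measure. The patent does not specify a measure; cosine similarity is used below purely as an illustrative choice, and the demographic labels are hypothetical:

```python
import math

def cosine_similarity(freq_a, freq_b):
    """Cosine similarity between two word-frequency dicts."""
    vocab = set(freq_a) | set(freq_b)
    dot = sum(freq_a.get(w, 0.0) * freq_b.get(w, 0.0) for w in vocab)
    norm_a = math.sqrt(sum(v * v for v in freq_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freq_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def most_similar_cluster(caller_freqs, clusters):
    """Return the demographic label of the cluster most similar to the caller's speech."""
    return max(clusters, key=lambda demo: cosine_similarity(caller_freqs, clusters[demo]))

clusters = {
    "demo-a": {"y'all": 0.05, "hello": 0.02},
    "demo-b": {"you guys": 0.04, "hello": 0.02},
}
best = most_similar_cluster({"y'all": 0.03, "hello": 0.01}, clusters)  # "demo-a"
```

A fuller implementation would fold the acoustic-characteristic similarity into the same score, e.g., as a weighted combination with the word-frequency similarity.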
The system may receive a purported identity of the telephone caller. The purported identity may include caller demographic data (e.g., based on an account associated with the caller and/or information provided by the caller during the call). For example, the caller demographic data may include current caller demographic data and/or historical caller demographic data. The system may compare the caller demographic data to the demographic associated with the most similar word cluster. Based on the comparing, the system may identify the telephone caller as likely having the purported identity if the caller demographic data (e.g., either current or historic) matches the demographic associated with the most similar word cluster. The system may identify the telephone caller as unlikely to have the purported identity if the caller demographic data matches a demographic associated with a word cluster different from the most similar word cluster.
The system may receive a threat score for the telephone caller. When the caller has a threat score, identifying the telephone caller as likely having the purported identity may include lowering the threat score or maintaining the threat score as received. Identifying the telephone caller as unlikely to have the purported identity may include raising the threat score.
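The threat-score adjustment described above can be sketched as a small function. The score range, penalty, and reward values below are illustrative assumptions, not values from the patent:

```python
def update_threat_score(threat_score, voice_demographic, purported_demographics,
                        penalty=20, reward=5):
    """Adjust a caller's threat score based on whether the demographic inferred
    from voice analysis matches any demographic (current or historical) tied to
    the purported identity. Score deltas are illustrative, not from the patent."""
    if voice_demographic in purported_demographics:
        # Caller likely has the purported identity: lower (or maintain) the score.
        return max(0, threat_score - reward)
    # Voice matched a different demographic: raise the score.
    return threat_score + penalty

update_threat_score(50, "region-south", {"region-south"})  # 45
update_threat_score(50, "region-west", {"region-south"})   # 70
```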
The cluster-based voice analysis systems and methods described herein may provide several technological advantages. For example, by leveraging preexisting demographic data for callers, the disclosed systems and methods may train custom data clusters providing reliable representative data sets for speech patterns of callers fitting the demographics. The disclosed systems and methods may then be able to use the clusters to verify a caller's identity without the need to perform costly processing to exactly match the caller's voice to previously gathered recordings of the caller's voice and without having to store unique voiceprints for each known caller. Furthermore, because the clusters are specific to demographics rather than individual users, even callers who have never called before may be correlated with a demographic based on speech analysis. This effectively may mean that the disclosed systems and methods can perform voice verification for any given user without being trained on that particular user. These features may make the disclosed systems and methods better than traditional voice verification because they may be available immediately the first time a user calls. These features may also make the disclosed systems and methods better than traditional voice verification because there may be no need to gather, store, and continually train data for each user specifically. Instead, cluster data may be broadly applied to all users, significantly reducing processing complexity and data storage needs.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 shows a call analysis system according to an embodiment of the present disclosure.
FIG. 2 shows a server device according to an embodiment of the present disclosure.
FIG. 3 shows a cluster generation process according to an embodiment of the present disclosure.
FIG. 4 shows a caller verification process according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF SEVERAL EMBODIMENTS
FIG. 1 shows a call analysis system according to an embodiment of the present disclosure. The system may leverage a telephone network 100, which may include at least one public switched telephone network, at least one cellular network, at least one data network (e.g., the Internet), or a combination thereof. User device 112 may place a phone call through telephone network 100 to phone-based service device 114 or vice versa. User device 112 may be a smartphone, tablet, computer, IP phone, landline phone, or other device configured to communicate by phone call. User device 112 may be operated by an account holder, a potential account holder, or a fraudster attempting to access an account, for example. While one user device 112 is shown in FIG. 1 for ease of illustration, any number of user devices 112 may communicate using telephone network 100. Phone-based service device 114 may be a smartphone, tablet, computer, IP phone, landline phone, or other device configured to communicate by phone call. Phone-based service device 114 may be operated by an account service provider and/or an employee thereof (e.g., phone-based service device 114 may include a server configured to provide automated call processing services, a phone operated by a call center employee, or a combination thereof). While one phone-based service device 114 is shown in FIG. 1 for ease of illustration, any number of phone-based service devices 114 may communicate using telephone network 100.
One or more server devices 102 may be connected to network 100 and/or phone-based service device 114. Server device 102 may be a computing device, such as a server or other computer. Server device 102 may include call analysis service 104 configured to receive audio data for calls between user device 112 and phone-based service device 114 and analyze the audio data to assess caller demographics and/or identity, as described herein. Server device 102 may receive the audio data through network 100 and/or from phone-based service device 114. Server device 102 may include cluster database 106. Server device 102 may use cluster database 106 to store data defining clusters of callers who fit various demographics, which server device 102 may generate over time as described herein. Server device 102 may compare analyzed audio data to cluster data to determine a cluster demographic that best fits the caller, for example. Server device 102 may also store audio data for analysis in cluster database 106 and/or elsewhere in server device 102 memory.
Server device 102 is depicted in FIG. 1 as a single server including a single call analysis service 104 and cluster database 106 for ease of illustration, but those of ordinary skill in the art will appreciate that server device 102 may be embodied in different forms for different implementations. For example, server device 102 may include a plurality of servers. Call analysis service 104 may comprise a variety of services such as an audio analysis service, a word detection service, a cluster generation service, a cluster analysis service, a threat determination service, and/or other services, as described in greater detail herein.
FIG. 2 is a block diagram of an example server device 102 that may implement various features and processes as described herein. The server device 102 may be implemented on any electronic device that runs software applications derived from compiled instructions, including without limitation personal computers, servers, smart phones, media players, electronic tablets, game consoles, email devices, etc. In some implementations, the server device 102 may include one or more processors 202, one or more input devices 204, one or more display devices 206, one or more network interfaces 208, and one or more computer-readable mediums 210. Each of these components may be coupled by bus 212.
Display device 206 may be any known display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology. Processor(s) 202 may use any known processor technology, including but not limited to graphics processors and multi-core processors. Input device 204 may be any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad or display. Bus 212 may be any known internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, NuBus, USB, Serial ATA, or FireWire. Computer-readable medium 210 may be any medium that participates in providing instructions to processor(s) 202 for execution, including without limitation non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.) or volatile media (e.g., SDRAM, ROM, etc.).
Computer-readable medium 210 may include various instructions 214 for implementing an operating system (e.g., Mac OS®, Windows®, Linux). The operating system may be multi-user, multiprocessing, multitasking, multithreading, real-time, and the like. The operating system may perform basic tasks, including but not limited to: recognizing input from input device 204; sending output to display device 206; keeping track of files and directories on computer-readable medium 210; controlling peripheral devices (e.g., disk drives, printers, etc.), which can be controlled directly or through an I/O controller; and managing traffic on bus 212. Network communications instructions 216 may establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, telephony, etc.).
Call analysis service instructions 218 can include instructions that provide call analysis related functions described herein. For example, call analysis service instructions 218 may identify words in call audio, build clusters based on caller demographics, compare caller information to clusters, assess caller identity, determine caller threat level, etc.
Application(s) 220 may be one or more applications that use or implement the processes described herein and/or other processes. The processes may also be implemented in operating system 214.
The described features may be implemented in one or more computer programs that may be executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program may be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions may include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor may receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer may include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer may also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data may include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features may be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features may be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination thereof. The components of the system may be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a telephone network, a LAN, a WAN, and the computers and networks forming the Internet.
The computer system may include clients and servers. A client and server may generally be remote from each other and may typically interact through a network. The relationship of client and server may arise by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
One or more features or steps of the disclosed embodiments may be implemented using an API. An API may define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation.
The API may be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter may be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters may be implemented in any programming language. The programming language may define the vocabulary and calling convention that a programmer will employ to access functions supporting the API.
In some implementations, an API call may report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc.
FIG. 3 shows a cluster generation process 300 according to an embodiment of the present disclosure. Server device 102 may perform cluster generation process 300 for calls where a participant's identity is verifiable in some other way. For example, server device 102 may perform cluster generation process 300 when an account holder has called from a known phone number and/or provided other indicia of their identity (e.g., provided data already found in their account data). In another example, server device 102 may perform cluster generation process 300 when phone-based service device 114 initiates the call to the account holder (e.g., to alert the account holder of account activity). In other embodiments, server device 102 may perform cluster generation process 300 for any or all calls.
At 302, one of user device 112 and phone-based service device 114 may initiate a phone call. In the following example, an account holder or other person operating user device 112 is the caller, and the caller places a call to phone-based service device 114. In this example, server device 102 may analyze the voice of the caller. However, the opposite case may also occur, where phone-based service device 114 places a call to user device 112; in that case, server device 102 may analyze the voice of the operator of user device 112.
At 304, server device 102 may collect caller audio data. For example, call analysis service 104 and/or phone-based service device 114 may include telephony recording hardware, software, and/or firmware configured to record the caller's voice and deliver the recording to call analysis service 104. The following steps of cluster generation process 300 may be performed in real time as the recording is fed to call analysis service 104 or may be performed on recorded call audio after the user has spoken.
At 306, server device 102 may identify words and/or word counts in the caller audio data. For example, call analysis service 104 may apply one or more machine learning and/or audio processing algorithms to the caller audio data to identify words and/or word counts. Suitable algorithms may include dynamic time warping, hidden Markov models, recurrent neural networks, and/or combinations thereof. For example, after likely words are identified using dynamic time warping audio analysis and/or hidden Markov prediction, recurrent neural network analysis may use the words already identified to better predict the current word being said. Through this processing, call analysis service 104 may be able to isolate words that may be unique to certain demographics. For example, some demographics may use "y'all" or "you guys" instead of the word "you" more frequently in speech than other demographics. If a caller uses one of these characteristic words frequently, the word identification processing may report a relatively high count of that word from the speech analysis.
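Once an upstream recognizer has produced a transcript, the word-counting portion of this step is straightforward. The sketch below assumes a transcript string as input; the recognition stage itself (dynamic time warping, hidden Markov models, recurrent networks) is assumed and not shown:

```python
import re
from collections import Counter

def word_counts(transcript):
    """Count words in a transcript produced by an upstream speech recognizer.
    The recognition step itself is assumed; this sketch only tallies words,
    preserving apostrophes so that markers like "y'all" survive intact."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return Counter(tokens)

counts = word_counts("Hello, y'all. Can y'all help me with my account?")
counts["y'all"]  # 2
```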
At 308, server device 102 may identify acoustic characteristics of the caller audio data. For example, call analysis service 104 may use a fast Fourier transform (FFT) to convert the caller audio data into features that represent the tone, frequencies, speed, and/or loudness of the speaker. Call analysis service 104 may also use cadence and background noises to compare similarities in the places from which a user makes calls as a secondary identifier (e.g., if the background noise sounds similar each time a user calls, unusual background noises may indicate the caller is calling from an unexpected location and may not be who they claim to be). Through this processing, call analysis service 104 may identify specific sounds that may be unique to certain demographics, such as tendencies to elongate or shorten vowel sounds and/or tendencies to speak more slowly or quickly than other demographics.
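A minimal version of the FFT-based feature extraction in this step might look like the following. This is a deliberately simplified sketch: real systems would use framed spectra, mel-frequency cepstral coefficients, or similar, and the feature names are hypothetical:

```python
import numpy as np

def acoustic_features(samples, sample_rate):
    """Extract simple frequency-domain features from mono audio samples via FFT.
    A minimal illustration of this step, not a production feature extractor."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]            # strongest frequency (rough pitch proxy)
    loudness = float(np.sqrt(np.mean(samples ** 2))) # RMS level as a loudness proxy
    return {"dominant_freq_hz": float(dominant), "rms_loudness": loudness}

rate = 8000
t = np.arange(rate) / rate            # one second of audio
tone = np.sin(2 * np.pi * 220.0 * t)  # 220 Hz test tone
acoustic_features(tone, rate)["dominant_freq_hz"]  # ~220.0
```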
At 310, server device 102 may correlate the identified words and acoustic characteristics. For example, as words are identified at step 306, call analysis service 104 may record data indicating a time at which each word was spoken. Furthermore, as sounds are identified at step 308, call analysis service 104 may record data indicating a time at which each sound was uttered. By correlating the times at which words were spoken with the times at which sounds were made, call analysis service 104 may determine how the caller pronounced each word. Call analysis service 104 may use this information to identify pronunciations that may be unique to certain demographics. For example, once words and sounds are correlated, call analysis service 104 may determine whether a caller elongates or shortens specific vowel sounds within specific words, how long the caller pauses between words, whether the caller's tone of voice raises or lowers at the beginnings or ends of words, whether the caller's volume of voice raises or lowers at the beginnings or ends of words, a speed at which the caller speaks, a pitch of the caller's voice, how the caller says certain specific words (e.g., "hello" or "goodbye"), and/or whether the caller has any other specific speech tendencies.
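The timestamp-based correlation described in this step can be sketched as matching each word's time span against the times of acoustic observations. The data layout (word start/end times, timestamped feature labels) is an illustrative assumption:

```python
def correlate_words_with_sounds(word_times, sound_times):
    """Match each timestamped word to the acoustic observations recorded while
    it was spoken. Times are in seconds; the structure is illustrative only."""
    correlated = {}
    for word, start, end in word_times:
        # Collect the acoustic observations whose timestamps fall inside the word.
        correlated[word] = [feat for t, feat in sound_times if start <= t < end]
    return correlated

words = [("hello", 0.0, 0.6), ("there", 0.6, 1.0)]
sounds = [(0.1, "rising-tone"), (0.5, "long-vowel"), (0.8, "falling-tone")]
correlate_words_with_sounds(words, sounds)
# {"hello": ["rising-tone", "long-vowel"], "there": ["falling-tone"]}
```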
At 312, server device 102 may determine a demographic for the caller. For example, call analysis service 104 may access account data for the caller. The account data may include the account holder's address of residence and previous addresses of residence. The account data may also include income information for the account holder. In some embodiments, the account data may include other information defining a demographic for the account holder (e.g., age, gender, occupation, etc.). Call analysis service 104 may use one or more of these data points to determine the demographic. For example, the caller may belong to a geographically-defined demographic based on their current home address and/or a home address where they grew up. Call analysis service 104 may select at least one determined demographic for the caller.
At 314, server device 102 may identify a cluster with a demographic similar to that of the caller. For example, call analysis service 104 may locate a cluster in cluster database 106 that is labeled with the determined demographic. If no such cluster exists in cluster database 106, call analysis service 104 may create the cluster in cluster database 106.
At 316, server device 102 may populate the identified cluster with caller audio data. For example, call analysis service 104 may add data describing the identified words and/or word counts from the caller audio data and/or data describing the identified acoustic characteristics from the caller audio data to the identified cluster in cluster database 106. In some embodiments, call analysis service 104 may compare the caller audio data with data already in the identified cluster to select a subset of the caller audio data for populating the identified cluster. For example, call analysis service 104 may use K-means clustering to identify the centers of clusters based on one or more of the words, word counts, and/or characteristics, and the caller may be identified with the cluster that is closest in distance based on the caller's own words, word counts, and/or characteristics. After a large enough subset of data is collected, call analysis service 104 may adjust the centers of clusters to the mean of all data points considered to be within each cluster. Call analysis service 104 may also use dynamic topic models for specific word clustering. With large enough new datasets, call analysis service 104 may update dynamic topic model clusters in two phases: an E-step and an M-step (expectation maximization).
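The K-means assignment and center update described in this step can be sketched in a few lines, assuming caller speech has already been reduced to numeric feature vectors (a simplifying assumption; the 2-D example data below is illustrative):

```python
import numpy as np

def assign_and_update(points, centers):
    """One K-means iteration over caller feature vectors: assign each point to
    its nearest center, then move each center to the mean of its assigned points."""
    # Distance of every point to every center, shape (n_points, n_centers).
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centers = np.array([
        points[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
        for k in range(len(centers))
    ])
    return labels, new_centers

points = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
labels, centers = assign_and_update(points, centers)
labels  # [0, 0, 1, 1]
```

Running this iteration repeatedly until assignments stop changing yields the converged cluster centers; the dynamic-topic-model update mentioned above would replace this with alternating E- and M-steps.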
FIG. 4 shows a caller verification process 400 according to an embodiment of the present disclosure. Server device 102 may perform caller verification process 400 to help determine whether a caller is who he or she claims to be. For example, server device 102 may perform caller verification process 400 for any calls placed while cluster database 106 contains a robust and detailed set of clusters. Given a trained cluster set, server device 102 may be able to determine whether a caller's voice is consistent with a demographic to which the caller is purported to belong. For example, server device 102 may analyze the voice of a caller attempting to open a new account to determine whether the voice is consistent with demographic information provided by the caller as part of the account setup process. In another example, server device 102 may analyze the voice of a caller attempting to access an account to determine whether the voice is consistent with known demographic(s) of the account holder.
At 402, one of user device 112 and phone-based service device 114 may initiate a phone call. In the following example, an account holder or other person operating user device 112 is the caller, and the caller places a call to phone-based service device 114. In this example, server device 102 may analyze the voice of the caller. However, the opposite case may also be true: phone-based service device 114 may place a call to user device 112, in which case server device 102 may analyze the voice of the operator of user device 112.
At 404, server device 102 may collect caller audio data. For example, call analysis service 104 and/or phone-based service device 114 may include telephony recording hardware, software, and/or firmware configured to record the caller's voice and deliver the recording to call analysis service 104. The following steps of caller verification process 400 may be performed in real time as the recording is fed to call analysis service 104 or may be performed on recorded call audio after the user has spoken.
At 406, server device 102 may identify words and/or word counts in the caller audio data. For example, call analysis service 104 may apply one or more machine learning and/or audio processing algorithms to the caller audio data to identify words and/or word counts. Suitable algorithms may include dynamic time warping, hidden Markov models, recurrent neural networks, and/or combinations thereof. For example, after likely words are identified using dynamic time warping audio analysis and/or hidden Markov prediction, recurrent neural network analysis may use the words identified previously to better predict the current word being said. Through this processing, call analysis service 104 may be able to isolate words that may be unique to certain demographics. For example, some demographics may use “y'all” or “you guys” instead of the word “you” more frequently in speech than other demographics. If a caller uses one of these characteristic words frequently, the word identification processing may report a relatively high count of that word from the speech analysis.
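Downstream of the speech-to-text stage described above, the word-count portion can be illustrated simply: given a transcript, count how often demographic-marker words appear. The marker list and the tokenization rule are assumptions for illustration only:

```python
from collections import Counter
import re

# Words whose relative frequency may differ across demographics
# (an illustrative list, not one taken from the disclosure).
MARKER_WORDS = {"y'all", "you guys", "you"}

def word_counts(transcript):
    """Count marker-word occurrences in a transcript.

    Single words come from a simple tokenizer; the two-word phrase
    "you guys" is counted separately via substring search. Note that
    a real system would subtract phrase overlaps (here, the "you"
    inside "you guys" also counts toward "you").
    """
    text = transcript.lower()
    counts = Counter(re.findall(r"[a-z']+", text))
    counts["you guys"] = text.count("you guys")
    return {w: counts[w] for w in MARKER_WORDS}

print(word_counts("Y'all ready? I told y'all that you guys were late."))
```

A high count of “y'all” relative to “you” would be the kind of signal the passage describes feeding into the demographic clusters.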
At 408, server device 102 may identify acoustic characteristics of the caller audio data. For example, call analysis service 104 may use a fast Fourier transform (FFT) to convert the caller audio data into features that represent the tone, frequencies, speed, and/or loudness of the speaker. Call analysis service 104 may also use cadence and background noise to compare similarities in the places from which a user calls as a secondary identifier (e.g., if the background noise sounds similar each time a user calls, unusual background noise may indicate the caller is calling from an unexpected location and may not be who they claim to be). Through this processing, call analysis service 104 may identify specific sounds that may be unique to certain demographics, such as tendencies to elongate or shorten vowel sounds and/or tendencies to speak more slowly or quickly than other demographics.
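The frequency-domain feature extraction might be sketched as below. A naive DFT stands in for an FFT so the example stays dependency-free, and the specific features chosen (dominant frequency, RMS loudness) are illustrative assumptions rather than the disclosure's feature set:

```python
import cmath
import math

def acoustic_features(samples, sample_rate):
    """Extract coarse tone/loudness features from a short mono audio frame.

    Uses an O(n^2) DFT for clarity; a production system would use an
    FFT and extract many more features (cadence, speaking rate, etc.).
    """
    n = len(samples)
    mags = []
    for k in range(1, n // 2):  # skip the DC bin
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    peak_bin = 1 + mags.index(max(mags))
    return {
        "dominant_hz": peak_bin * sample_rate / n,
        "loudness_rms": math.sqrt(sum(x * x for x in samples) / n),
    }

# A 50 Hz sine sampled at 800 Hz for 160 samples (10 full cycles).
sr = 800
samples = [math.sin(2 * math.pi * 50 * t / sr) for t in range(160)]
features = acoustic_features(samples, sr)
print(features)
```

For the synthetic tone, the dominant frequency lands exactly on the 50 Hz bin and the RMS loudness of a full-cycle sine is 1/√2.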
At 410, server device 102 may correlate the identified words and acoustic characteristics. For example, as words are identified at step 406, call analysis service 104 may record data indicating a time at which each word was spoken. Furthermore, as sounds are identified at step 408, call analysis service 104 may record data indicating a time at which each sound was uttered. By correlating the times at which words were spoken with the times at which sounds were made, call analysis service 104 may determine how the caller pronounced each word. Call analysis service 104 may use this information to identify pronunciations that may be unique to certain demographics. For example, once words and sounds are correlated, call analysis service 104 may determine whether a caller elongates or shortens specific vowel sounds within specific words, how long the caller pauses between words, whether the caller's tone of voice rises or falls at the beginnings or ends of words, whether the caller's volume of voice rises or falls at the beginnings or ends of words, a speed at which the caller speaks, a pitch of the caller's voice, how the caller says certain specific words (e.g., “hello” or “goodbye”), and/or whether the caller has any other specific speech tendencies.
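The timestamp correlation described above reduces to an interval join between word boundaries and sound events. The word boundaries and sound labels below are hypothetical examples:

```python
def pronunciations(word_times, sound_times):
    """Attach each timestamped sound event to the word being spoken.

    word_times:  list of (word, start_sec, end_sec)
    sound_times: list of (sound_label, sec)
    Sounds falling outside every word interval are ignored.
    """
    result = {word: [] for word, _, _ in word_times}
    for sound, sec in sound_times:
        for word, start, end in word_times:
            if start <= sec < end:
                result[word].append(sound)
                break
    return result

words = [("hello", 0.0, 0.6), ("there", 0.6, 1.0)]
sounds = [("elongated_e", 0.2), ("rising_tone", 0.55), ("short_e", 0.7)]
print(pronunciations(words, sounds))
```

The joined record (e.g., an elongated vowel and rising tone inside "hello") is the per-word pronunciation signature the passage describes.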
At 412, server device 102 may compare the identified words and/or acoustic characteristics with the clusters in cluster database 106. For example, call analysis service 104 may use a K-nearest neighbors algorithm to compare the identified words and/or acoustic characteristics with the K-means and/or dynamic topic models generated as described above. Through this processing, call analysis service 104 may identify a cluster in cluster database 106 that contains data that is most similar to the user's speech. The identified cluster may be associated with a particular demographic.
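A minimal K-nearest-neighbors comparison over the cluster data might look like the following; the feature vectors and demographic labels are toy assumptions:

```python
import math
from collections import Counter

def nearest_demographic(caller_vec, labeled_points, k=3):
    """Majority vote among the k labeled feature vectors nearest the caller.

    labeled_points: list of (feature_vector, demographic_label)
    """
    by_dist = sorted(labeled_points,
                     key=lambda p: math.dist(caller_vec, p[0]))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy training data: (marker-word rate, speech rate) -> demographic label
points = [((5.0, 2.0), "AL_100k"), ((5.5, 2.1), "AL_100k"),
          ((0.2, 3.1), "FL_30k"), ((0.3, 3.0), "FL_30k"),
          ((4.8, 2.2), "AL_100k")]
print(nearest_demographic((5.2, 2.0), points))
```

The returned label plays the role of the "most similar cluster" demographic that is compared against the caller's claimed demographic at steps 414-418.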
At 414, server device 102 may determine a demographic for the caller. For example, call analysis service 104 may access account data for the caller. The account data may include the account holder's address of residence and previous addresses of residence. The account data may also include income information for the account holder. In some embodiments, the account data may include other information defining a demographic for the account holder (e.g., age, gender, occupation, etc.). Call analysis service 104 may use one or more of these data points to determine the demographic. For example, the caller may belong to a geographically-defined demographic based on their current home address and/or a home address where they grew up. In some situations, for example when the caller is attempting to open an account, call analysis service 104 may not have access to predetermined caller demographic data. In these cases, call analysis service 104 may determine the caller's demographic based on information about the call (e.g., a phone number for the caller or an IP address for the caller) and/or based on information provided by the caller (e.g., one or more spoken addresses of past or current residence and/or income level provided by the caller). Call analysis service 104 may select at least one determined demographic for the caller.
At 416, server device 102 may compare the caller's demographic with the demographic of the cluster from cluster database 106 that most nearly matches the identified words and/or acoustic characteristics from the audio data. For example, the caller may say they are a specific account holder, and that specific account holder may have a particular income level (e.g., $100,000/yr) and/or current and/or historical addresses (e.g., the account holder may have been born and raised in Alabama and may now live in Ohio). In another example, the caller may self-report the income level and/or current and/or historical addresses to provide background information to open an account. In some embodiments, the income level and/or current and/or historical addresses may be obtained from credit rating bureaus and/or from data associated with other known accounts. Call analysis service 104 may compare this account holder information or self-reported information with the demographic information associated with the cluster from cluster database 106 that most nearly matches the caller's speech.
At 418, server device 102 may determine whether the demographics match and indicate a result. For example, call analysis service 104 may receive a threat level score for the user. The threat level score may be a score that takes a variety of security-related factors into account to assess whether a caller is attempting fraudulent activity. In this example, a higher score may indicate a higher risk of fraud, although other embodiments may score likelihood of fraud differently (e.g., a lower score indicates a higher risk of fraud). Continuing the example, the cluster from cluster database 106 that most nearly matches the caller's speech may be a cluster of callers who earn $100,000/yr from Alabama. In this case, call analysis service 104 may determine that the caller's demographic matches the cluster's demographic and, therefore, the identity provided by the caller is likely to be correct. To indicate that the caller's identity is likely correct, call analysis service 104 may either downgrade the threat score or maintain the score at the same level. In an alternative example, the cluster from cluster database 106 that most nearly matches the caller's speech may be a cluster of callers who earn $30,000/yr from Florida. In this case, call analysis service 104 may determine that the caller's demographic does not match the cluster's demographic and, therefore, the identity provided by the caller is unlikely to be correct. To indicate that the caller's identity is not likely to be correct, call analysis service 104 may upgrade the threat score. Call analysis service 104 may report the threat score as adjusted through process 400, for example by providing the score to the operator of phone-based service device 114 and/or to a fraud prevention system for further analysis and/or action (e.g., analyzing the caller's actions for fraudulent activity, analyzing the account for fraudulent activity, blocking actions taken to affect the account, etc.).
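The threat-score adjustment at this step reduces to a simple rule. The numeric weights and bounds below are illustrative assumptions; the disclosure specifies only the direction of the adjustment (raise on mismatch, lower or hold on match):

```python
def adjust_threat_score(score, caller_demo, cluster_demo,
                        penalty=25, reward=5, floor=0, ceiling=100):
    """Raise the score on a demographic mismatch; lower it on a match.

    Higher scores indicate a higher risk of fraud, per the example in
    the text. Penalty/reward magnitudes and the 0-100 range are
    hypothetical tuning choices.
    """
    if caller_demo == cluster_demo:
        return max(floor, score - reward)
    return min(ceiling, score + penalty)

print(adjust_threat_score(40, "AL_100k", "AL_100k"))  # match: lowered to 35
print(adjust_threat_score(40, "AL_100k", "FL_30k"))   # mismatch: raised to 65
```

The adjusted score would then be reported to the phone-based service operator or a fraud prevention system, as the passage describes.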
While various embodiments have been described above, it should be understood that they have been presented by way of example and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. For example, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
In addition, it should be understood that any figures which highlight the functionality and advantages are presented for example purposes only. The disclosed methodology and system are each sufficiently flexible and configurable such that they may be utilized in ways other than that shown.
Although the term “at least one” may often be used in the specification, claims and drawings, the terms “a”, “an”, “the”, “said”, etc. also signify “at least one” or “the at least one” in the specification, claims and drawings.
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112(f). Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112(f).

Claims (20)

What is claimed is:
1. A method of authenticating a telephone caller, the method comprising:
receiving, by a processor of an authentication server, audio data including speech of the telephone caller;
analyzing, by the processor, the audio data to identify a plurality of words from the speech of the telephone caller and to identify an occurrence frequency for each of the plurality of words;
comparing, by the processor, the plurality of words and the occurrence frequencies to a plurality of word clusters, each word cluster comprising a plurality of associated words and an occurrence frequency for each of the plurality of associated words, and each word cluster being associated with one of a plurality of demographics;
determining, by the processor, a most similar word cluster of the plurality of word clusters to the audio data based on a similarity of the plurality of words and the plurality of associated words of the most similar cluster and a similarity of the occurrence frequencies of the plurality of words and the occurrence frequencies of the plurality of associated words of the most similar cluster;
receiving, by the processor, a purported identity of the telephone caller, the purported identity including caller demographic data;
comparing, by the processor, the caller demographic data to the demographic associated with the most similar word cluster; and
identifying, by the processor, the telephone caller as at least one of:
likely having the purported identity in response to determining the caller demographic data matches the demographic associated with the most similar word cluster, and
unlikely to have the purported identity in response to determining the caller demographic data matches a demographic associated with a word cluster different from the most similar word cluster.
2. The method of claim 1, further comprising:
analyzing, by the processor, the audio data to identify at least one acoustic characteristic of the speech of the telephone caller; and
comparing, by the processor, the at least one acoustic characteristic of the speech of the telephone caller to the plurality of word clusters, each word cluster further comprising at least one associated acoustic characteristic;
wherein the determining, by the processor, the most similar word cluster of the plurality of word clusters to the audio data is further based on a similarity of the at least one acoustic characteristic of the speech of the telephone caller and the at least one associated acoustic characteristic of the most similar cluster.
3. The method of claim 2, wherein the analyzing, by the processor, the audio data to identify at least one acoustic characteristic of the speech of the telephone caller comprises:
correlating, by the processor, each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words; and
determining, by the processor, at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
4. The method of claim 1, wherein:
the caller demographic data comprises current caller demographic data and historical caller demographic data;
determining the caller demographic data matches the demographic associated with the most similar word cluster comprises determining at least one of the current caller demographic data and the historical caller demographic data matches the demographic associated with the most similar word cluster; and
determining the caller demographic data matches the demographic associated with the word cluster different from the most similar word cluster comprises determining at least one of the current caller demographic data and the historical caller demographic data matches the demographic associated with the word cluster different from the most similar word cluster.
5. The method of claim 1, further comprising:
receiving, by the processor, a threat score for the telephone caller;
wherein the identifying, by the processor, the telephone caller as likely having the purported identity comprises lowering the threat score or maintaining the threat score as received.
6. The method of claim 1, further comprising:
receiving, by the processor, a threat score for the telephone caller;
wherein the identifying, by the processor, the telephone caller as unlikely to have the purported identity comprises raising the threat score.
7. A method of identifying a telephone caller, the method comprising:
receiving, by a processor of an authentication server, audio data including speech of a plurality of telephone calls;
for at least a subset of the plurality of telephone calls, determining, by the processor, demographic data for a telephone caller making the telephone call;
for at least the subset of the plurality of telephone calls, analyzing, by the processor, the audio data to identify a plurality of words from the speech of the telephone caller;
receiving, by the processor, a plurality of word clusters, each word cluster associated with a specific demographic;
populating, by the processor, at least one word cluster with at least a subset of the plurality of words from the speech of each telephone caller associated with the specific demographic based on the demographic data for the telephone caller;
for each word cluster, determining, by the processor, a plurality of associated words from among at least the subset of the plurality of words and an occurrence frequency for each of the plurality of associated words; and
for at least one of the plurality of telephone calls:
analyzing, by the processor, the audio data to identify a plurality of words from the speech of the telephone caller and to identify an occurrence frequency for each of the plurality of words,
comparing, by the processor, the plurality of words from the speech of the telephone caller and the occurrence frequency for each of the plurality of words from the speech of the telephone caller to the plurality of word clusters,
based on the comparing, identifying, by the processor, a most similar word cluster of the plurality of word clusters to the audio data based on a similarity of the plurality of words from the speech of the telephone caller and the plurality of associated words of the most similar cluster and a similarity of the occurrence frequencies of the plurality of words from the speech of the telephone caller and the occurrence frequencies of the plurality of associated words of the most similar cluster, and
determining, by the processor, a caller demographic of the telephone caller, the caller demographic being the same as the demographic of the most similar word cluster.
8. The method of claim 7, further comprising, for at least the subset of the plurality of telephone calls:
analyzing, by the processor, the audio data to identify at least one acoustic characteristic of the speech of the telephone caller; and
populating, by the processor, at least one word cluster with at least a subset of the at least one acoustic characteristic of the speech of each telephone caller associated with the specific demographic based on the demographic data for the telephone caller.
9. The method of claim 8, wherein the analyzing, by the processor, the audio data to identify at least one acoustic characteristic of the speech of the telephone caller comprises:
correlating, by the processor, each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words; and
determining, by the processor, at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
10. The method of claim 8, further comprising, for the at least one of the plurality of telephone calls:
analyzing, by the processor, the audio data to identify at least one acoustic characteristic of the speech of the telephone caller;
comparing, by the processor, the at least one acoustic characteristic of the speech of the telephone caller to the plurality of word clusters;
wherein the determining, by the processor, the most similar word cluster of the plurality of word clusters to the audio data is further based on a similarity of the at least one acoustic characteristic of the speech of the telephone caller and the at least one associated acoustic characteristic of the most similar cluster.
11. The method of claim 10, wherein the analyzing, by the processor, the audio data to identify at least one acoustic characteristic of the speech of the telephone caller comprises:
correlating, by the processor, each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words; and
determining, by the processor, at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
12. The method of claim 7, further comprising:
receiving, by the processor, a purported identity of the telephone caller, the purported identity including a purported demographic;
comparing, by the processor, the caller demographic to the purported demographic; and
identifying, by the processor, the telephone caller as at least one of:
likely having the purported identity in response to determining the caller demographic matches the purported demographic, and
unlikely to have the purported identity in response to determining the caller demographic matches a demographic other than the purported demographic.
13. The method of claim 12, wherein:
the purported identity comprises current caller demographic data and historical caller demographic data;
determining the caller demographic matches the purported demographic comprises determining at least one of the current caller demographic data and the historical caller demographic data matches the caller demographic; and
determining the caller demographic data matches the demographic other than the purported demographic comprises determining neither of the current caller demographic data and the historical caller demographic data matches the caller demographic.
14. The method of claim 12, further comprising:
receiving, by the processor, a threat score for the telephone caller;
wherein the identifying, by the processor, the telephone caller as likely having the purported identity comprises lowering the threat score or maintaining the threat score as received.
15. The method of claim 12, further comprising:
receiving, by the processor, a threat score for the telephone caller;
wherein the identifying, by the processor, the telephone caller as unlikely to have the purported identity comprises raising the threat score.
16. A system for caller identification and authentication, the system comprising:
a telephony recorder configured to record audio data for calls placed to at least one phone number;
an authentication server comprising a processor and a non-transitory memory, the memory storing instructions that, when executed by the processor, cause the processor to perform processing comprising:
receiving audio data including speech of a plurality of telephone calls;
using audio data for at least a subset of the plurality of telephone calls to populate a plurality of word clusters, each word cluster being associated with a specific demographic, the populating of the plurality of word clusters comprising:
for each of the subset of the plurality of telephone calls, determining demographic data for a telephone caller making the telephone call, and analyzing the audio data to identify a plurality of words from the speech of the telephone caller, and
populating at least one word cluster with at least a subset of the plurality of words from the speech of each telephone caller associated with the specific demographic based on the demographic data for the telephone caller; and
using audio data for at least one of the plurality of telephone calls to identify the telephone caller making the telephone call, the identifying comprising:
analyzing the audio data to identify a plurality of words from the speech of the telephone caller and to identify an occurrence frequency for each of the plurality of words,
comparing the plurality of words and the occurrence frequencies to the plurality of word clusters,
determining a most similar word cluster of the plurality of word clusters to the audio data based on a similarity of the plurality of words and the plurality of associated words of the most similar cluster and a similarity of the occurrence frequencies of the plurality of words and occurrence frequencies of the plurality of associated words of the most similar cluster,
receiving a purported identity of the telephone caller, the purported identity including caller demographic data,
determining whether the caller demographic data matches the demographic associated with the most similar word cluster, and
identifying the telephone caller as:
likely having the purported identity in response to determining that the caller demographic data matches the demographic associated with the most similar word cluster, or
unlikely to have the purported identity in response to determining that the caller demographic data does not match the demographic associated with the most similar word cluster.
17. The system of claim 16, wherein the instructions further cause the processor to perform processing comprising, for at least the subset of the plurality of telephone calls:
analyzing the audio data to identify at least one acoustic characteristic of the speech of the telephone caller; and
populating at least one word cluster with at least a subset of the at least one acoustic characteristic of the speech of each telephone caller associated with the specific demographic based on the demographic data for the telephone caller.
18. The system of claim 17, wherein the analyzing of the audio data to identify at least one acoustic characteristic of the speech of the telephone caller comprises:
correlating each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words; and
determining at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
19. The system of claim 16, wherein the instructions further cause the processor to perform processing comprising, for the at least one of the plurality of telephone calls:
analyzing the audio data to identify at least one acoustic characteristic of the speech of the telephone caller;
comparing the at least one acoustic characteristic of the speech of the telephone caller to the plurality of word clusters;
wherein the determining the most similar word cluster of the plurality of word clusters to the audio data is further based on a similarity of the at least one acoustic characteristic of the speech of the telephone caller and the at least one associated acoustic characteristic of the most similar cluster.
20. The system of claim 19, wherein the analyzing the audio data to identify at least one acoustic characteristic of the speech of the telephone caller comprises:
correlating each of a plurality of portions of an acoustic or frequency component of the audio data with each of at least a subset of the plurality of words; and
determining at least one acoustic characteristic for how the telephone caller says at least one of the subset of the plurality of words based on the portion of the acoustic or frequency component of the audio data correlated with the at least one of the subset of the plurality of words.
Publications (1)

Publication Number: US10003688B1 (en)
Publication Date: 2018-06-19

Family

ID=62554825

Family Applications (5)

Application NumberTitlePriority DateFiling Date
US15/891,712ActiveUS10003688B1 (en)2018-02-082018-02-08Systems and methods for cluster-based voice verification
US15/980,214ActiveUS10091352B1 (en)2018-02-082018-05-15Systems and methods for cluster-based voice verification
US16/118,032ActiveUS10205823B1 (en)2018-02-082018-08-30Systems and methods for cluster-based voice verification
US16/263,404ActiveUS10412214B2 (en)2018-02-082019-01-31Systems and methods for cluster-based voice verification
US16/529,203ActiveUS10574812B2 (en)2018-02-082019-08-01Systems and methods for cluster-based voice verification

Family Applications After (4)

Application NumberTitlePriority DateFiling Date
US15/980,214ActiveUS10091352B1 (en)2018-02-082018-05-15Systems and methods for cluster-based voice verification
US16/118,032ActiveUS10205823B1 (en)2018-02-082018-08-30Systems and methods for cluster-based voice verification
US16/263,404ActiveUS10412214B2 (en)2018-02-082019-01-31Systems and methods for cluster-based voice verification
US16/529,203ActiveUS10574812B2 (en)2018-02-082019-08-01Systems and methods for cluster-based voice verification

Country Status (3)

CountryLink
US (5)US10003688B1 (en)
EP (1)EP3525209B1 (en)
CA (2)CA3115632C (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US10205823B1 (en)*2018-02-082019-02-12Capital One Services, LlcSystems and methods for cluster-based voice verification
CN109346089A (en)*2018-09-272019-02-15深圳市声扬科技有限公司Living body identity identifying method, device, computer equipment and readable storage medium storing program for executing
US10477021B1 (en)*2018-11-292019-11-12Capital One Services, LlcSystems for detecting harassing communication
CN110648670A (en)*2019-10-222020-01-03中信银行股份有限公司Fraud identification method and device, electronic equipment and computer-readable storage medium
CN110942783A (en)*2019-10-152020-03-31国家计算机网络与信息安全管理中心Group call type crank call classification method based on audio multistage clustering
CN110970036A (en)*2019-12-242020-04-07网易(杭州)网络有限公司Voiceprint recognition method and device, computer storage medium and electronic equipment
CN112687278A (en)*2020-12-032021-04-20科大讯飞股份有限公司Identity verification method, electronic equipment and storage device
US20210258423A1 (en)*2019-07-302021-08-19Nice LtdMethod and system for proactive fraudster exposure in a customer service channel

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11099023B1 (en) | 2016-01-05 | 2021-08-24 | Open Invention Network Llc | Intermediate navigation destinations
US11758041B2 (en) | 2018-08-07 | 2023-09-12 | First Orion Corp. | Call content management for mobile devices
US11019204B1 (en) | 2018-08-07 | 2021-05-25 | First Orion Corp. | Call content management for mobile devices
US10694035B2 (en) | 2018-08-07 | 2020-06-23 | First Orion Corp. | Call content management for mobile devices
US10601986B1 (en) * | 2018-08-07 | 2020-03-24 | First Orion Corp. | Call screening service for communication devices
CN109637547B (en) * | 2019-01-29 | 2020-11-03 | 北京猎户星空科技有限公司 | Audio data labeling method and device, electronic equipment and storage medium
US11893976B2 (en) * | 2020-07-06 | 2024-02-06 | Samsung Electronics Co., Ltd. | Electronic device and operation method thereof

Citations (132)

Publication number | Priority date | Publication date | Assignee | Title
US20040193740A1 (en) | 2000-02-14 | 2004-09-30 | Nice Systems Ltd. | Content-based storage management
US20050018622A1 (en) | 2002-06-13 | 2005-01-27 | Nice Systems Ltd. | Method for forwarding and storing session packets according to preset and /or dynamic rules
US20050123115A1 (en) | 2000-08-28 | 2005-06-09 | Nice Systems, Ltd. | Digital recording of ip based distributed switching platform
US20060045185A1 (en) | 2004-08-31 | 2006-03-02 | Ramot At Tel-Aviv University Ltd. | Apparatus and methods for the detection of abnormal motion in a video stream
US20060074660A1 (en) * | 2004-09-29 | 2006-04-06 | France Telecom | Method and apparatus for enhancing speech recognition accuracy by using geographic data to filter a set of words
US20060136597A1 (en) | 2004-12-08 | 2006-06-22 | Nice Systems Ltd. | Video streaming parameter optimization and QoS
US20060133624A1 (en) | 2003-08-18 | 2006-06-22 | Nice Systems Ltd. | Apparatus and method for audio content analysis, marking and summing
US20060179064A1 (en) | 2005-02-07 | 2006-08-10 | Nice Systems Ltd. | Upgrading performance using aggregated information shared between management systems
US20060227719A1 (en) | 2005-03-16 | 2006-10-12 | Nice Systems Ltd. | Third party recording of data transferred using the IP protocol
US20060268847A1 (en) | 2002-06-13 | 2006-11-30 | Nice Systems Ltd. | Voice over IP capturing
US20060285665A1 (en) | 2005-05-27 | 2006-12-21 | Nice Systems Ltd. | Method and apparatus for fraud detection
US20070122003A1 (en) | 2004-01-12 | 2007-05-31 | Elbit Systems Ltd. | System and method for identifying a threat associated person among a crowd
US20070250318A1 (en) | 2006-04-25 | 2007-10-25 | Nice Systems Ltd. | Automatic speech analysis
US20080040110A1 (en) | 2005-08-08 | 2008-02-14 | Nice Systems Ltd. | Apparatus and Methods for the Detection of Emotions in Audio Interactions
US20080066184A1 (en) | 2006-09-13 | 2008-03-13 | Nice Systems Ltd. | Method and system for secure data collection and distribution
US20080139167A1 (en) * | 1999-04-16 | 2008-06-12 | Shelia Jean Burgess | Communications Control Method And Apparatus
US20080148397A1 (en) | 2006-10-26 | 2008-06-19 | Nice Systems Ltd. | Method and apparatus for lawful interception of web based messaging communication
US20080152122A1 (en) | 2006-12-20 | 2008-06-26 | Nice Systems Ltd. | Method and system for automatic quality evaluation
US20080181417A1 (en) | 2006-01-25 | 2008-07-31 | Nice Systems Ltd. | Method and Apparatus For Segmentation of Audio Interactions
US20080189171A1 (en) | 2007-02-01 | 2008-08-07 | Nice Systems Ltd. | Method and apparatus for call categorization
US20080195385A1 (en) | 2007-02-11 | 2008-08-14 | Nice Systems Ltd. | Method and system for laughter detection
US20080195387A1 (en) | 2006-10-19 | 2008-08-14 | Nice Systems Ltd. | Method and apparatus for large population speaker identification in telephone interactions
US20080228296A1 (en) | 2007-03-12 | 2008-09-18 | Nice Systems Ltd. | Method and apparatus for generic analytics
US20090007263A1 (en) | 2006-05-18 | 2009-01-01 | Nice Systems Ltd. | Method and Apparatus for Combining Traffic Analysis and Monitoring Center in Lawful Interception
US20090012826A1 (en) | 2007-07-02 | 2009-01-08 | Nice Systems Ltd. | Method and apparatus for adaptive interaction analytics
US20090033745A1 (en) | 2002-02-06 | 2009-02-05 | Nice Systems, Ltd. | Method and apparatus for video frame sequence-based object tracking
US20090043573A1 (en) | 2007-08-09 | 2009-02-12 | Nice Systems Ltd. | Method and apparatus for recognizing a speaker in lawful interception systems
US20090150152A1 (en) | 2007-11-18 | 2009-06-11 | Nice Systems | Method and apparatus for fast search in call-center monitoring
US20090292541A1 (en) | 2008-05-25 | 2009-11-26 | Nice Systems Ltd. | Methods and apparatus for enhancing speech analytics
US20090292583A1 (en) | 2008-05-07 | 2009-11-26 | Nice Systems Ltd. | Method and apparatus for predicting customer churn
US20100070276A1 (en) | 2008-09-16 | 2010-03-18 | Nice Systems Ltd. | Method and apparatus for interaction or discourse analytics
US20100088323A1 (en) | 2008-10-06 | 2010-04-08 | Nice Systems Ltd. | Method and apparatus for visualization of interaction categorization
US20100106499A1 (en) | 2008-10-27 | 2010-04-29 | Nice Systems Ltd | Methods and apparatus for language identification
US20100161604A1 (en) | 2008-12-23 | 2010-06-24 | Nice Systems Ltd | Apparatus and method for multimedia content based manipulation
US20100199189A1 (en) | 2006-03-12 | 2010-08-05 | Nice Systems, Ltd. | Apparatus and method for target oriented law enforcement interception and analysis
US20100228656A1 (en) | 2009-03-09 | 2010-09-09 | Nice Systems Ltd. | Apparatus and method for fraud prevention
US20100246799A1 (en) | 2009-03-31 | 2010-09-30 | Nice Systems Ltd. | Methods and apparatus for deep interaction analysis
US20110004473A1 (en) | 2009-07-06 | 2011-01-06 | Nice Systems Ltd. | Apparatus and method for enhanced speech recognition
US20110206198A1 (en) | 2004-07-14 | 2011-08-25 | Nice Systems Ltd. | Method, apparatus and system for capturing and analyzing interaction based content
US20110208522A1 (en) | 2010-02-21 | 2011-08-25 | Nice Systems Ltd. | Method and apparatus for detection of sentiment in automated transcriptions
US20110282661A1 (en) | 2010-05-11 | 2011-11-17 | Nice Systems Ltd. | Method for speaker source classification
US20110307257A1 (en) | 2010-06-10 | 2011-12-15 | Nice Systems Ltd. | Methods and apparatus for real-time interaction analysis in call centers
US20110307258A1 (en) | 2010-06-10 | 2011-12-15 | Nice Systems Ltd. | Real-time application of interaction anlytics
US20120053990A1 (en) | 2008-05-07 | 2012-03-01 | Nice Systems Ltd. | System and method for predicting customer churn
US8140392B2 (en) * | 2003-10-06 | 2012-03-20 | Utbk, Inc. | Methods and apparatuses for pay for lead advertisements
US20120116766A1 (en) | 2010-11-07 | 2012-05-10 | Nice Systems Ltd. | Method and apparatus for large vocabulary continuous speech recognition
US20120114314A1 (en) | 2008-09-11 | 2012-05-10 | Nice Systems Ltd. | Method and system for utilizing storage in network video recorders
US20120155663A1 (en) | 2010-12-16 | 2012-06-21 | Nice Systems Ltd. | Fast speaker hunting in lawful interception systems
US8238540B1 (en) * | 2008-09-08 | 2012-08-07 | RingRevenue, Inc. | Methods and systems for processing and managing telephonic communications using ring pools
US20120209605A1 (en) | 2011-02-14 | 2012-08-16 | Nice Systems Ltd. | Method and apparatus for data exploration of interactions
US20120209606A1 (en) | 2011-02-14 | 2012-08-16 | Nice Systems Ltd. | Method and apparatus for information extraction from interactions
US8248237B2 (en) * | 2008-04-02 | 2012-08-21 | Yougetitback Limited | System for mitigating the unauthorized use of a device
US20120215535A1 (en) | 2011-02-23 | 2012-08-23 | Nice Systems Ltd. | Method and apparatus for automatic correlation of multi-channel interactions
US20120296642A1 (en) | 2011-05-19 | 2012-11-22 | Nice Systems Ltd. | Method and appratus for temporal speech scoring
US20130057721A1 (en) | 2009-11-17 | 2013-03-07 | Nice-Systems Ltd. | Automatic control of visual parameters in video processing
US8411828B2 (en) * | 2008-10-17 | 2013-04-02 | Commonwealth Intellectual Property Holdings, Inc. | Intuitive voice navigation
US20130142318A1 (en) | 2009-12-31 | 2013-06-06 | Nice-Systems Ltd. | Peer-to-peer telephony recording
US8532630B2 (en) * | 2004-11-24 | 2013-09-10 | Vascode Technologies Ltd. | Unstructured supplementary service data application within a wireless network
US20130315382A1 (en) | 2012-05-24 | 2013-11-28 | Nice Systems Ltd. | System and method for robust call center operation using multiple data centers
US8599836B2 (en) * | 2010-01-27 | 2013-12-03 | Neobitspeak LLC | Web-based, hosted, self-service outbound contact center utilizing speaker-independent interactive voice response and including enhanced IP telephony
US20130336137A1 (en) | 2012-06-15 | 2013-12-19 | Nice Systems Ltd. | System and method for situation-aware ip-based communication interception and intelligence extraction
US20140025376A1 (en) | 2012-07-17 | 2014-01-23 | Nice-Systems Ltd | Method and apparatus for real time sales optimization based on audio interactions analysis
US8649516B2 (en) | 2001-04-11 | 2014-02-11 | Nice-Systems Ltd. | Digital video protection for authenticity verification
US20140050309A1 (en) | 2011-11-22 | 2014-02-20 | Nice Systems Ltd. | System and method for real-time customized agent training
US20140049600A1 (en) | 2012-08-16 | 2014-02-20 | Nice-Systems Ltd. | Method and system for improving surveillance of ptz cameras
US8660849B2 (en) * | 2010-01-18 | 2014-02-25 | Apple Inc. | Prioritizing selection criteria by automated assistant
US20140067373A1 (en) | 2012-09-03 | 2014-03-06 | Nice-Systems Ltd | Method and apparatus for enhanced phonetic indexing and search
US20140106776A1 (en) | 2010-05-24 | 2014-04-17 | Nice-Systems Ltd. | Method and system for estimation of mobile station velocity in a cellular system based on geographical data
US20140114891A1 (en) | 2011-07-19 | 2014-04-24 | Nice-Systems Ltd. | Distributed scalable incrementally updated models in decisioning systems
US20140119438A1 (en) | 2010-02-02 | 2014-05-01 | Nice-Systems Ltd. | System and method for relative storage of video data
US20140122311A1 (en) | 2012-05-09 | 2014-05-01 | Nice-Systems Ltd. | System and method for determining a risk root cause
US20140129299A1 (en) | 2012-11-06 | 2014-05-08 | Nice-Systems Ltd | Method and apparatus for detection and analysis of first contact resolution failures
US20140146958A1 (en) | 2012-11-28 | 2014-05-29 | Nice-Systems Ltd. | System and method for real-time process management
US20140149488A1 (en) | 2012-11-26 | 2014-05-29 | Nice-Systems Ltd. | System and method for engaging a mobile device
US20140172859A1 (en) | 2012-12-13 | 2014-06-19 | Nice-Systems Ltd | Method and apparatus for trade interaction chain reconstruction
US8768313B2 (en) * | 2009-08-17 | 2014-07-01 | Digimarc Corporation | Methods and systems for image or audio recognition processing
US8781923B2 (en) * | 2001-01-19 | 2014-07-15 | C-Sam, Inc. | Aggregating a user's transactions across a plurality of service institutions
US20140200929A1 (en) * | 2008-04-02 | 2014-07-17 | Yougetitback Limited | Systems and methods for dynamically assessing and mitigating risk of an insured entity
US20140257820A1 (en) | 2013-03-10 | 2014-09-11 | Nice-Systems Ltd | Method and apparatus for real time emotion detection in audio interactions
US20140280172A1 (en) | 2013-03-13 | 2014-09-18 | Nice-Systems Ltd. | System and method for distributed categorization
US20140270118A1 (en) | 2013-03-14 | 2014-09-18 | Nice-Systems Ltd | Method and apparatus for fail-safe control of recordings
US20140280431A1 (en) | 2013-03-13 | 2014-09-18 | Nice-Systems Ltd. | System and method for interoperability between flex applications and .net applications
US8847285B2 (en) * | 2011-09-26 | 2014-09-30 | Semiconductor Components Industries, Llc | Depleted charge-multiplying CCD image sensor
US20140330563A1 (en) | 2013-05-02 | 2014-11-06 | Nice-Systems Ltd. | Seamless authentication and enrollment
US20140328512A1 (en) | 2013-05-05 | 2014-11-06 | Nice Systems Ltd. | System and method for suspect search
US8909590B2 (en) | 2011-09-28 | 2014-12-09 | Nice Systems Technologies Uk Limited | Online asynchronous reinforcement learning from concurrent customer histories
US8913880B1 (en) | 2013-06-23 | 2014-12-16 | Nice-Systems Ltd. | Method and apparatus for managing video storage
US8914314B2 (en) | 2011-09-28 | 2014-12-16 | Nice Systems Technologies Uk Limited | Online temporal difference learning from incomplete customer interaction histories
US8917860B2 (en) * | 2008-09-08 | 2014-12-23 | Invoca, Inc. | Methods and systems for processing and managing communications
US8932368B2 (en) * | 2008-04-01 | 2015-01-13 | Yougetitback Limited | Method for monitoring the unauthorized use of a device
US20150032448A1 (en) | 2013-07-25 | 2015-01-29 | Nice-Systems Ltd | Method and apparatus for expansion of search queries on large vocabulary continuous speech recognition transcripts
US20150033331A1 (en) | 2013-07-24 | 2015-01-29 | Nice-Systems Ltd. | System and method for webpage analysis
US20150074296A1 (en) | 2013-09-08 | 2015-03-12 | Nice-Systems Ltd. | Edge devices settings via external source
US20150098561A1 (en) | 2013-10-08 | 2015-04-09 | Nice-Systems Ltd. | System and method for real-time monitoring of a contact center using a mobile computer
US20150112947A1 (en) | 2013-10-22 | 2015-04-23 | Nice Systems Ltd. | System and method for database flow management
US9031536B2 (en) * | 2008-04-02 | 2015-05-12 | Yougetitback Limited | Method for mitigating the unauthorized use of a device
US9036808B2 (en) * | 2008-09-08 | 2015-05-19 | Invoca, Inc. | Methods and systems for data transfer and campaign management
US20150142704A1 (en) * | 2013-11-20 | 2015-05-21 | Justin London | Adaptive Virtual Intelligent Agent
US20150163532A1 (en) | 2013-12-05 | 2015-06-11 | Nice-Systems Ltd. | Method and apparatus for managing video storage
US20150189078A1 (en) | 2013-12-31 | 2015-07-02 | Nice-Systems Ltd. | Call recording with interaction metadata correlation
US20150189240A1 (en) | 2013-12-29 | 2015-07-02 | Nice-Systems Ltd. | System and method for detecting an object of interest
US9098361B1 (en) | 2014-03-19 | 2015-08-04 | Nice-Systems Ltd. | System and method for interoperability between an embedded activex control and an external application
US20150227437A1 (en) | 2014-02-10 | 2015-08-13 | Nice-Systems Ltd. | Storage device as buffer for unreliable storage
US20150235159A1 (en) | 2014-02-20 | 2015-08-20 | Nice-Systems Ltd. | System and method for contact center agent guidance with feedback
US20150242285A1 (en) | 2014-02-27 | 2015-08-27 | Nice-Systems Ltd. | Persistency free architecture
US20150254233A1 (en) | 2014-03-06 | 2015-09-10 | Nice-Systems Ltd | Text-based unsupervised learning of language models
US20150271330A1 (en) | 2014-03-21 | 2015-09-24 | Nice-Systems Ltd. | System and method for recording and reporting compliance data
US20150278192A1 (en) | 2014-03-25 | 2015-10-01 | Nice-Systems Ltd | Language model adaptation based on filtered data
US20150295850A1 (en) | 2014-04-11 | 2015-10-15 | Nice-Systems Ltd. | Spare resource election in a computing system
US20150293755A1 (en) | 2014-04-09 | 2015-10-15 | Nice-Systems Ltd. | System and automated method for configuring a predictive model and deploying it on a target platform
US20150334234A1 (en) | 2014-05-15 | 2015-11-19 | Nice-Systems Ltd | Structured storage management for interactions recording
US20150370784A1 (en) | 2014-06-18 | 2015-12-24 | Nice-Systems Ltd | Language model adaptation for specific texts
US9223627B2 (en) | 2013-03-27 | 2015-12-29 | Nice-Systems Ltd. | Management of task allocation in a multi-core processing system
US20160048546A1 (en) | 2014-08-14 | 2016-02-18 | Nice-Systems Ltd | Determination of prominent phrases in multi-channel interactions by multi-feature evaluations
US9294497B1 (en) | 2014-12-29 | 2016-03-22 | Nice-Systems Ltd. | Method and system for behavioral and risk prediction in networks using automatic feature generation and selection using network topolgies
US20160133256A1 (en) | 2014-11-12 | 2016-05-12 | Nice-Systems Ltd | Script compliance in spoken documents
US9369570B1 (en) | 2015-05-18 | 2016-06-14 | Nice-Systems Ltd | Concurrent recordings of telephonic interactions
US20160171973A1 (en) | 2014-12-16 | 2016-06-16 | Nice-Systems Ltd | Out of vocabulary pattern learning
US20160180835A1 (en) | 2014-12-23 | 2016-06-23 | Nice-Systems Ltd | User-aided adaptation of a phonetic dictionary
US9396448B2 (en) | 2013-09-10 | 2016-07-19 | Nice-Systems Ltd. | Distributed and open schema interactions management system and method
US9420098B2 (en) | 2013-07-08 | 2016-08-16 | Nice-Systems Ltd | Prediction interactive vocla response
US9438733B2 (en) * | 2008-09-08 | 2016-09-06 | Invoca, Inc. | Methods and systems for data transfer and campaign management
US9479727B1 (en) | 2015-07-15 | 2016-10-25 | Nice-Systems Ltd. | Call recording with screen and audio correlation
US20160335244A1 (en) | 2015-05-14 | 2016-11-17 | Nice-Systems Ltd. | System and method for text normalization in noisy channels
US9576204B2 (en) | 2015-03-24 | 2017-02-21 | Qognify Ltd. | System and method for automatic calculation of scene geometry in crowded video scenes
US9647978B2 (en) * | 1999-04-01 | 2017-05-09 | Callwave Communications, Llc | Methods and apparatus for providing expanded telecommunications service
US20170132052A1 (en) | 2015-11-10 | 2017-05-11 | Nice-Systems Ltd | Analyzing and automating work-flow in a computerized user-interface
US9674362B2 (en) | 2015-09-29 | 2017-06-06 | Nice Ltd. | Customer journey management
US20170192955A1 (en) | 2015-12-30 | 2017-07-06 | Nice-Systems Ltd. | System and method for sentiment lexicon expansion
US9721571B2 (en) | 2015-06-14 | 2017-08-01 | Nice Ltd. | System and method for voice print generation
US9787838B1 (en) | 2016-09-29 | 2017-10-10 | Nice-Systems Ltd | System and method for analysis of interactions with a customer service center
US9818115B2 (en) * | 2008-08-28 | 2017-11-14 | Paypal, Inc. | Voice phone-based method and system to authenticate users

Family Cites Families (25)

Publication number | Priority date | Publication date | Assignee | Title
US6236365B1 (en) * | 1996-09-09 | 2001-05-22 | Tracbeam, Llc | Location of a mobile station using a plurality of commercial wireless infrastructures
US7457281B1 (en) * | 1996-11-15 | 2008-11-25 | Ele Tel, Inc. | System and method for transmitting voice messages through the internet
US7983902B2 (en) * | 2007-08-23 | 2011-07-19 | Google Inc. | Domain dictionary creation by detection of new topic words using divergence value comparison
US8676577B2 (en) * | 2008-03-31 | 2014-03-18 | Canyon IP Holdings, LLC | Use of metadata to post process speech recognition output
US9838877B2 (en) * | 2008-04-02 | 2017-12-05 | Yougetitback Limited | Systems and methods for dynamically assessing and mitigating risk of an insured entity
US9916481B2 (en) * | 2008-04-02 | 2018-03-13 | Yougetitback Limited | Systems and methods for mitigating the unauthorized use of a device
US8577016B1 (en) * | 2008-09-08 | 2013-11-05 | RingRevenue, Inc. | Methods and systems for processing and managing telephonic communications using ring pools
US9171322B2 (en) * | 2008-09-08 | 2015-10-27 | Invoca, Inc. | Methods and systems for routing calls in a marketing campaign
US8781105B1 (en) * | 2008-09-08 | 2014-07-15 | Invoca, Inc. | Methods and systems for processing and managing communications
US9292861B2 (en) * | 2008-09-08 | 2016-03-22 | Invoca, Inc. | Methods and systems for routing calls
US20140372119A1 (en) * | 2008-09-26 | 2014-12-18 | Google, Inc. | Compounded Text Segmentation
US9197736B2 (en) * | 2009-12-31 | 2015-11-24 | Digimarc Corporation | Intuitive computing methods and systems
EP2405362B1 (en) * | 2010-07-08 | 2013-01-16 | STMicroelectronics (Grenoble 2) SAS | A connection arrangement
US8868039B2 (en) * | 2011-10-12 | 2014-10-21 | Digimarc Corporation | Context-related arrangements
KR101971008B1 (en) * | 2012-06-29 | 2019-04-22 | 삼성전자주식회사 | Control method for terminal using context-aware and terminal thereof
US9837078B2 (en) * | 2012-11-09 | 2017-12-05 | Mattersight Corporation | Methods and apparatus for identifying fraudulent callers
US9148512B1 (en) * | 2013-10-11 | 2015-09-29 | Angel.Com Incorporated | Routing user communications to agents
GB2515479A (en) * | 2013-06-24 | 2014-12-31 | Nokia Corp | Acoustic music similarity determiner
US9401143B2 (en) * | 2014-03-24 | 2016-07-26 | Google Inc. | Cluster specific speech model
US9529898B2 (en) * | 2014-08-26 | 2016-12-27 | Google Inc. | Clustering classes in language modeling
US20160071526A1 (en) * | 2014-09-09 | 2016-03-10 | Analog Devices, Inc. | Acoustic source tracking and selection
US10516782B2 (en) * | 2015-02-03 | 2019-12-24 | Dolby Laboratories Licensing Corporation | Conference searching and playback of search results
US10326886B1 (en) * | 2017-08-31 | 2019-06-18 | Amazon Technologies, Inc. | Enabling additional endpoints to connect to audio mixing device
US10194023B1 (en) * | 2017-08-31 | 2019-01-29 | Amazon Technologies, Inc. | Voice user interface for wired communications system
US10003688B1 (en) * | 2018-02-08 | 2018-06-19 | Capital One Services, Llc | Systems and methods for cluster-based voice verification

Patent Citations (166)

Publication numberPriority datePublication dateAssigneeTitle
US9647978B2 (en)*1999-04-012017-05-09Callwave Communications, LlcMethods and apparatus for providing expanded telecommunications service
US20080139167A1 (en)*1999-04-162008-06-12Shelia Jean BurgessCommunications Control Method And Apparatus
US20040193740A1 (en)2000-02-142004-09-30Nice Systems Ltd.Content-based storage management
US20050123115A1 (en)2000-08-282005-06-09Nice Systems, Ltd.Digital recording of ip based distributed switching platform
US8781923B2 (en)*2001-01-192014-07-15C-Sam, Inc.Aggregating a user's transactions across a plurality of service institutions
US8649516B2 (en)2001-04-112014-02-11Nice-Systems Ltd.Digital video protection for authenticity verification
US20140098954A1 (en)2001-04-112014-04-10Nice-Systems Ltd.Digital video protection for authenticity verification
US9098724B2 (en)2001-04-112015-08-04Nice-Systems Ltd.Digital video protection for authenticity verification
US20090033745A1 (en)2002-02-062009-02-05Nice Systems, Ltd.Method and apparatus for video frame sequence-based object tracking
US20060268847A1 (en)2002-06-132006-11-30Nice Systems Ltd.Voice over IP capturing
US20050018622A1 (en)2002-06-132005-01-27Nice Systems Ltd.Method for forwarding and storing session packets according to preset and /or dynamic rules
US20060133624A1 (en)2003-08-182006-06-22Nice Systems Ltd.Apparatus and method for audio content analysis, marking and summing
US8140392B2 (en)*2003-10-062012-03-20Utbk, Inc.Methods and apparatuses for pay for lead advertisements
US20070122003A1 (en)2004-01-122007-05-31Elbit Systems Ltd.System and method for identifying a threat associated person among a crowd
US20110206198A1 (en)2004-07-142011-08-25Nice Systems Ltd.Method, apparatus and system for capturing and analyzing interaction based content
US20060045185A1 (en)2004-08-312006-03-02Ramot At Tel-Aviv University Ltd.Apparatus and methods for the detection of abnormal motion in a video stream
US20060074660A1 (en)*2004-09-292006-04-06France TelecomMethod and apparatus for enhancing speech recognition accuracy by using geographic data to filter a set of words
US8532630B2 (en)*2004-11-242013-09-10Vascode Technologies Ltd.Unstructured supplementary service data application within a wireless network
US20060136597A1 (en)2004-12-082006-06-22Nice Systems Ltd.Video streaming parameter optimization and QoS
US20060179064A1 (en)2005-02-072006-08-10Nice Systems Ltd.Upgrading performance using aggregated information shared between management systems
US20060227719A1 (en)2005-03-162006-10-12Nice Systems Ltd.Third party recording of data transferred using the IP protocol
US20080154609A1 (en)2005-05-272008-06-26Nice Systems, Ltd.Method and apparatus for fraud detection
US20060285665A1 (en)2005-05-272006-12-21Nice Systems Ltd.Method and apparatus for fraud detection
US20080040110A1 (en)2005-08-082008-02-14Nice Systems Ltd.Apparatus and Methods for the Detection of Emotions in Audio Interactions
US20080181417A1 (en)2006-01-252008-07-31Nice Systems Ltd.Method and Apparatus For Segmentation of Audio Interactions
US20100199189A1 (en)2006-03-122010-08-05Nice Systems, Ltd.Apparatus and method for target oriented law enforcement interception and analysis
US20070250318A1 (en)2006-04-252007-10-25Nice Systems Ltd.Automatic speech analysis
US20090007263A1 (en)2006-05-182009-01-01Nice Systems Ltd.Method and Apparatus for Combining Traffic Analysis and Monitoring Center in Lawful Interception
US20080066184A1 (en)2006-09-132008-03-13Nice Systems Ltd.Method and system for secure data collection and distribution
US20080195387A1 (en)2006-10-192008-08-14Nice Systems Ltd.Method and apparatus for large population speaker identification in telephone interactions
US20080148397A1 (en)2006-10-262008-06-19Nice Systems Ltd.Method and apparatus for lawful interception of web based messaging communication
US20080152122A1 (en)2006-12-202008-06-26Nice Systems Ltd.Method and system for automatic quality evaluation
US20080189171A1 (en)2007-02-012008-08-07Nice Systems Ltd.Method and apparatus for call categorization
US20080195385A1 (en)2007-02-112008-08-14Nice Systems Ltd.Method and system for laughter detection
US20080228296A1 (en)2007-03-122008-09-18Nice Systems Ltd.Method and apparatus for generic analytics
US20090012826A1 (en)2007-07-022009-01-08Nice Systems Ltd.Method and apparatus for adaptive interaction analytics
US20090043573A1 (en)2007-08-092009-02-12Nice Systems Ltd.Method and apparatus for recognizing a speaker in lawful interception systems
US20090150152A1 (en)2007-11-182009-06-11Nice SystemsMethod and apparatus for fast search in call-center monitoring
US8932368B2 (en)*2008-04-012015-01-13Yougetitback LimitedMethod for monitoring the unauthorized use of a device
US8248237B2 (en)*2008-04-022012-08-21Yougetitback LimitedSystem for mitigating the unauthorized use of a device
US9031536B2 (en)*2008-04-022015-05-12Yougetitback LimitedMethod for mitigating the unauthorized use of a device
US20140200929A1 (en)*2008-04-022014-07-17Yougetitback LimitedSystems and methods for dynamically assessing and mitigating risk of an insured entity
US20090292583A1 (en)2008-05-072009-11-26Nice Systems Ltd.Method and apparatus for predicting customer churn
US20120053990A1 (en)2008-05-072012-03-01Nice Systems Ltd.System and method for predicting customer churn
US20090292541A1 (en)2008-05-252009-11-26Nice Systems Ltd.Methods and apparatus for enhancing speech analytics
US9818115B2 (en)*2008-08-282017-11-14Paypal, Inc.Voice phone-based method and system to authenticate users
US8238540B1 (en)*2008-09-082012-08-07RingRevenue, Inc.Methods and systems for processing and managing telephonic communications using ring pools
US8917860B2 (en)*2008-09-082014-12-23Invoca, Inc.Methods and systems for processing and managing communications
US9036808B2 (en)*2008-09-082015-05-19Invoca, Inc.Methods and systems for data transfer and campaign management
US9438733B2 (en)*2008-09-082016-09-06Invoca, Inc.Methods and systems for data transfer and campaign management
US8948565B2 (en)2008-09-112015-02-03Nice-Systems LtdMethod and system for utilizing storage in network video recorders
US20130343731A1 (en)2008-09-112013-12-26Nice-Systems LtdMethod and system for utilizing storage in network video recorders
US20120114314A1 (en)2008-09-112012-05-10Nice Systems Ltd.Method and system for utilizing storage in network video recorders
US20100070276A1 (en)2008-09-162010-03-18Nice Systems Ltd.Method and apparatus for interaction or discourse analytics
US20100088323A1 (en)2008-10-062010-04-08Nice Systems Ltd.Method and apparatus for visualization of interaction categorization
US8411828B2 (en)*2008-10-172013-04-02Commonwealth Intellectual Property Holdings, Inc.Intuitive voice navigation
US20100106499A1 (en)2008-10-272010-04-29Nice Systems LtdMethods and apparatus for language identification
US20100161604A1 (en)2008-12-232010-06-24Nice Systems LtdApparatus and method for multimedia content based manipulation
US20100228656A1 (en)2009-03-092010-09-09Nice Systems Ltd.Apparatus and method for fraud prevention
US20100246799A1 (en)2009-03-312010-09-30Nice Systems Ltd.Methods and apparatus for deep interaction analysis
US20110004473A1 (en)2009-07-062011-01-06Nice Systems Ltd.Apparatus and method for enhanced speech recognition
US8768313B2 (en)*2009-08-172014-07-01Digimarc CorporationMethods and systems for image or audio recognition processing
US20130057721A1 (en)2009-11-172013-03-07Nice-Systems Ltd.Automatic control of visual parameters in video processing
US8909811B2 (en)2009-12-312014-12-09Nice-Systems Ltd.Peer-to-peer telephony recording
US20130142318A1 (en)2009-12-312013-06-06Nice-Systems Ltd.Peer-to-peer telephony recording
US8660849B2 (en)*2010-01-182014-02-25Apple Inc.Prioritizing selection criteria by automated assistant
US8599836B2 (en)*2010-01-272013-12-03Neobitspeak LLCWeb-based, hosted, self-service outbound contact center utilizing speaker-independent interactive voice response and including enhanced IP telephony
US9083952B2 (en)2010-02-022015-07-14Nice Systems Ltd.System and method for relative storage of video data
US20140119438A1 (en)2010-02-022014-05-01Nice-Systems Ltd.System and method for relative storage of video data
US20110208522A1 (en)2010-02-212011-08-25Nice Systems Ltd.Method and apparatus for detection of sentiment in automated transcriptions
US20110282661A1 (en)2010-05-112011-11-17Nice Systems Ltd.Method for speaker source classification
US9002378B2 (en)2010-05-242015-04-07Nice-Systems Ltd.Method and system for estimation of mobile station velocity in a cellular system based on geographical data
US20140106776A1 (en)2010-05-242014-04-17Nice-Systems Ltd.Method and system for estimation of mobile station velocity in a cellular system based on geographical data
US20110307257A1 (en)2010-06-102011-12-15Nice Systems Ltd.Methods and apparatus for real-time interaction analysis in call centers
US20110307258A1 (en)2010-06-102011-12-15Nice Systems Ltd.Real-time application of interaction anlytics
US20120116766A1 (en)2010-11-072012-05-10Nice Systems Ltd.Method and apparatus for large vocabulary continuous speech recognition
US20120155663A1 (en)2010-12-162012-06-21Nice Systems Ltd.Fast speaker hunting in lawful interception systems
US20120209606A1 (en)2011-02-142012-08-16Nice Systems Ltd.Method and apparatus for information extraction from interactions
US20120209605A1 (en)2011-02-142012-08-16Nice Systems Ltd.Method and apparatus for data exploration of interactions
US20120215535A1 (en)2011-02-232012-08-23Nice Systems Ltd.Method and apparatus for automatic correlation of multi-channel interactions
US20120296642A1 (en)2011-05-192012-11-22Nice Systems Ltd.Method and appratus for temporal speech scoring
US20140114891A1 (en)2011-07-192014-04-24Nice-Systems Ltd.Distributed scalable incrementally updated models in decisioning systems
US9524472B2 (en)2011-07-192016-12-20Nice Technologies Uk LimitedDistributed scalable incrementally updated models in decisioning systems
US8847285B2 (en)*2011-09-262014-09-30Semiconductor Components Industries, LlcDepleted charge-multiplying CCD image sensor
US8914314B2 (en)2011-09-282014-12-16Nice Systems Technologies Uk LimitedOnline temporal difference learning from incomplete customer interaction histories
US9367820B2 (en)2011-09-282016-06-14Nice Systems Technologies Uk LimitedOnline temporal difference learning from incomplete customer interaction histories
US8909590B2 (en) | 2011-09-28 | 2014-12-09 | Nice Systems Technologies Uk Limited | Online asynchronous reinforcement learning from concurrent customer histories
US8924318B2 (en) | 2011-09-28 | 2014-12-30 | Nice Systems Technologies Uk Limited | Online asynchronous reinforcement learning from concurrent customer histories
US20140050309A1 (en) | 2011-11-22 | 2014-02-20 | Nice Systems Ltd. | System and method for real-time customized agent training
US9401990B2 (en) | 2011-11-22 | 2016-07-26 | Nice-Systems Ltd. | System and method for real-time customized agent training
US20140122311A1 (en) | 2012-05-09 | 2014-05-01 | Nice-Systems Ltd. | System and method for determining a risk root cause
US20130315382A1 (en) | 2012-05-24 | 2013-11-28 | Nice Systems Ltd. | System and method for robust call center operation using multiple data centers
US9054969B2 (en) | 2012-06-15 | 2015-06-09 | Nice-Systems Ltd. | System and method for situation-aware IP-based communication interception and intelligence extraction
US20130336137A1 (en) | 2012-06-15 | 2013-12-19 | Nice Systems Ltd. | System and method for situation-aware ip-based communication interception and intelligence extraction
US20140025376A1 (en) | 2012-07-17 | 2014-01-23 | Nice-Systems Ltd | Method and apparatus for real time sales optimization based on audio interactions analysis
US20140049600A1 (en) | 2012-08-16 | 2014-02-20 | Nice-Systems Ltd. | Method and system for improving surveillance of ptz cameras
US20140067373A1 (en) | 2012-09-03 | 2014-03-06 | Nice-Systems Ltd | Method and apparatus for enhanced phonetic indexing and search
US20140129299A1 (en) | 2012-11-06 | 2014-05-08 | Nice-Systems Ltd | Method and apparatus for detection and analysis of first contact resolution failures
US20140149488A1 (en) | 2012-11-26 | 2014-05-29 | Nice-Systems Ltd. | System and method for engaging a mobile device
US9167093B2 (en) | 2012-11-28 | 2015-10-20 | Nice-Systems Ltd. | System and method for real-time process management
US20140146958A1 (en) | 2012-11-28 | 2014-05-29 | Nice-Systems Ltd. | System and method for real-time process management
US20140172859A1 (en) | 2012-12-13 | 2014-06-19 | Nice-Systems Ltd | Method and apparatus for trade interaction chain reconstruction
US9430800B2 (en) | 2012-12-13 | 2016-08-30 | Nice-Systems Ltd | Method and apparatus for trade interaction chain reconstruction
US20140257820A1 (en) | 2013-03-10 | 2014-09-11 | Nice-Systems Ltd | Method and apparatus for real time emotion detection in audio interactions
US9093081B2 (en) | 2013-03-10 | 2015-07-28 | Nice-Systems Ltd | Method and apparatus for real time emotion detection in audio interactions
US20140280172A1 (en) | 2013-03-13 | 2014-09-18 | Nice-Systems Ltd. | System and method for distributed categorization
US9489445B2 (en) | 2013-03-13 | 2016-11-08 | Nice Systems Ltd. | System and method for distributed categorization
US9491222B2 (en) | 2013-03-13 | 2016-11-08 | Nice-Systems Ltd. | System and method for interoperability between flex applications and .NET applications
US20140280431A1 (en) | 2013-03-13 | 2014-09-18 | Nice-Systems Ltd. | System and method for interoperability between flex applications and .net applications
US20140270118A1 (en) | 2013-03-14 | 2014-09-18 | Nice-Systems Ltd | Method and apparatus for fail-safe control of recordings
US9179000B2 (en) | 2013-03-14 | 2015-11-03 | Nice-Systems Ltd | Method and apparatus for fail-safe control of recordings
US9223627B2 (en) | 2013-03-27 | 2015-12-29 | Nice-Systems Ltd. | Management of task allocation in a multi-core processing system
US20140330563A1 (en) | 2013-05-02 | 2014-11-06 | Nice-Systems Ltd. | Seamless authentication and enrollment
US9620123B2 (en) | 2013-05-02 | 2017-04-11 | Nice Ltd. | Seamless authentication and enrollment
US20140328512A1 (en) | 2013-05-05 | 2014-11-06 | Nice Systems Ltd. | System and method for suspect search
US9471849B2 (en) | 2013-05-05 | 2016-10-18 | Qognify Ltd. | System and method for suspect search
US8913880B1 (en) | 2013-06-23 | 2014-12-16 | Nice-Systems Ltd. | Method and apparatus for managing video storage
US9420098B2 (en) | 2013-07-08 | 2016-08-16 | Nice-Systems Ltd | Prediction interactive vocla response
US9614862B2 (en) | 2013-07-24 | 2017-04-04 | Nice Ltd. | System and method for webpage analysis
US20150033331A1 (en) | 2013-07-24 | 2015-01-29 | Nice-Systems Ltd. | System and method for webpage analysis
US20150032448A1 (en) | 2013-07-25 | 2015-01-29 | Nice-Systems Ltd | Method and apparatus for expansion of search queries on large vocabulary continuous speech recognition transcripts
US9245523B2 (en) | 2013-07-25 | 2016-01-26 | Nice-Systems Ltd | Method and apparatus for expansion of search queries on large vocabulary continuous speech recognition transcripts
US9053046B2 (en) | 2013-09-08 | 2015-06-09 | Nice-Systems Ltd | Edge devices settings via external source
US20150074296A1 (en) | 2013-09-08 | 2015-03-12 | Nice-Systems Ltd. | Edge devices settings via external source
US9396448B2 (en) | 2013-09-10 | 2016-07-19 | Nice-Systems Ltd. | Distributed and open schema interactions management system and method
US20150098561A1 (en) | 2013-10-08 | 2015-04-09 | Nice-Systems Ltd. | System and method for real-time monitoring of a contact center using a mobile computer
US9405786B2 (en) | 2013-10-22 | 2016-08-02 | Nice-Systems Ltd. | System and method for database flow management
US20150112947A1 (en) | 2013-10-22 | 2015-04-23 | Nice Systems Ltd. | System and method for database flow management
US20150142704A1 (en)* | 2013-11-20 | 2015-05-21 | Justin London | Adaptive Virtual Intelligent Agent
US9538207B2 (en) | 2013-12-05 | 2017-01-03 | Qognify Ltd. | Method and apparatus for managing video storage
US20150163532A1 (en) | 2013-12-05 | 2015-06-11 | Nice-Systems Ltd. | Method and apparatus for managing video storage
US9854208B2 (en) | 2013-12-29 | 2017-12-26 | Qognify Ltd. | System and method for detecting an object of interest
US20150189240A1 (en) | 2013-12-29 | 2015-07-02 | Nice-Systems Ltd. | System and method for detecting an object of interest
US9197744B2 (en) | 2013-12-31 | 2015-11-24 | Nice-Systems Ltd. | Call recording with interaction metadata correlation
US20150189078A1 (en) | 2013-12-31 | 2015-07-02 | Nice-Systems Ltd. | Call recording with interaction metadata correlation
US20150227437A1 (en) | 2014-02-10 | 2015-08-13 | Nice-Systems Ltd. | Storage device as buffer for unreliable storage
US9262276B2 (en) | 2014-02-10 | 2016-02-16 | Qognify Ltd. | Storage device as buffer for unreliable storage
US20150235159A1 (en) | 2014-02-20 | 2015-08-20 | Nice-Systems Ltd. | System and method for contact center agent guidance with feedback
US9747167B2 (en) | 2014-02-27 | 2017-08-29 | Nice Ltd. | Persistency free architecture
US20150242285A1 (en) | 2014-02-27 | 2015-08-27 | Nice-Systems Ltd. | Persistency free architecture
US20150254233A1 (en) | 2014-03-06 | 2015-09-10 | Nice-Systems Ltd | Text-based unsupervised learning of language models
US9098361B1 (en) | 2014-03-19 | 2015-08-04 | Nice-Systems Ltd. | System and method for interoperability between an embedded activex control and an external application
US20150271330A1 (en) | 2014-03-21 | 2015-09-24 | Nice-Systems Ltd. | System and method for recording and reporting compliance data
US9564122B2 (en) | 2014-03-25 | 2017-02-07 | Nice Ltd. | Language model adaptation based on filtered data
US20150278192A1 (en) | 2014-03-25 | 2015-10-01 | Nice-Systems Ltd | Language model adaptation based on filtered data
US20150293755A1 (en) | 2014-04-09 | 2015-10-15 | Nice-Systems Ltd. | System and automated method for configuring a predictive model and deploying it on a target platform
US9374315B2 (en) | 2014-04-11 | 2016-06-21 | Nice-Systems Ltd. | Spare resource election in a computing system
US20150295850A1 (en) | 2014-04-11 | 2015-10-15 | Nice-Systems Ltd. | Spare resource election in a computing system
US20150334234A1 (en) | 2014-05-15 | 2015-11-19 | Nice-Systems Ltd | Structured storage management for interactions recording
US9256596B2 (en) | 2014-06-18 | 2016-02-09 | Nice-Systems Ltd | Language model adaptation for specific texts
US20150370784A1 (en) | 2014-06-18 | 2015-12-24 | Nice-Systems Ltd | Language model adaptation for specific texts
US20160048546A1 (en) | 2014-08-14 | 2016-02-18 | Nice-Systems Ltd | Determination of prominent phrases in multi-channel interactions by multi-feature evaluations
US20160133256A1 (en) | 2014-11-12 | 2016-05-12 | Nice-Systems Ltd | Script compliance in spoken documents
US20160171973A1 (en) | 2014-12-16 | 2016-06-16 | Nice-Systems Ltd | Out of vocabulary pattern learning
US9607618B2 (en) | 2014-12-16 | 2017-03-28 | Nice-Systems Ltd | Out of vocabulary pattern learning
US20160180835A1 (en) | 2014-12-23 | 2016-06-23 | Nice-Systems Ltd | User-aided adaptation of a phonetic dictionary
US9294497B1 (en) | 2014-12-29 | 2016-03-22 | Nice-Systems Ltd. | Method and system for behavioral and risk prediction in networks using automatic feature generation and selection using network topolgies
US9576204B2 (en) | 2015-03-24 | 2017-02-21 | Qognify Ltd. | System and method for automatic calculation of scene geometry in crowded video scenes
US20160335244A1 (en) | 2015-05-14 | 2016-11-17 | Nice-Systems Ltd. | System and method for text normalization in noisy channels
US9369570B1 (en) | 2015-05-18 | 2016-06-14 | Nice-Systems Ltd | Concurrent recordings of telephonic interactions
US9721571B2 (en) | 2015-06-14 | 2017-08-01 | Nice Ltd. | System and method for voice print generation
US9479727B1 (en) | 2015-07-15 | 2016-10-25 | Nice-Systems Ltd. | Call recording with screen and audio correlation
US9674362B2 (en) | 2015-09-29 | 2017-06-06 | Nice Ltd. | Customer journey management
US20170132052A1 (en) | 2015-11-10 | 2017-05-11 | Nice-Systems Ltd | Analyzing and automating work-flow in a computerized user-interface
US20170192955A1 (en) | 2015-12-30 | 2017-07-06 | Nice-Systems Ltd. | System and method for sentiment lexicon expansion
US9787838B1 (en) | 2016-09-29 | 2017-10-10 | Nice-Systems Ltd | System and method for analysis of interactions with a customer service center

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10205823B1 (en)* | 2018-02-08 | 2019-02-12 | Capital One Services, Llc | Systems and methods for cluster-based voice verification
US20190245967A1 (en)* | 2018-02-08 | 2019-08-08 | Capital One Services, Llc | Systems and methods for cluster-based voice verification
US10412214B2 (en)* | 2018-02-08 | 2019-09-10 | Capital One Services, Llc | Systems and methods for cluster-based voice verification
US10574812B2 (en)* | 2018-02-08 | 2020-02-25 | Capital One Services, Llc | Systems and methods for cluster-based voice verification
CN109346089A (en)* | 2018-09-27 | 2019-02-15 | 深圳市声扬科技有限公司 | Living body identity identifying method, device, computer equipment and readable storage medium storing program for executing
US10477021B1 (en)* | 2018-11-29 | 2019-11-12 | Capital One Services, Llc | Systems for detecting harassing communication
US11375062B2 (en) | 2018-11-29 | 2022-06-28 | Capital One Services, Llc | Systems for detecting harassing communication
US11025777B2 (en) | 2018-11-29 | 2021-06-01 | Capital One Services, Llc | Systems for detecting harassing communication
US11800014B2 (en)* | 2019-07-30 | 2023-10-24 | Nice Ltd | Method and system for proactive fraudster exposure in a customer service channel
US20210258423A1 (en)* | 2019-07-30 | 2021-08-19 | Nice Ltd | Method and system for proactive fraudster exposure in a customer service channel
CN110942783A (en)* | 2019-10-15 | 2020-03-31 | 国家计算机网络与信息安全管理中心 | Group call type crank call classification method based on audio multistage clustering
CN110942783B (en)* | 2019-10-15 | 2022-06-17 | 国家计算机网络与信息安全管理中心 | Group call type crank call classification method based on audio multistage clustering
CN110648670B (en)* | 2019-10-22 | 2021-11-26 | 中信银行股份有限公司 | Fraud identification method and device, electronic equipment and computer-readable storage medium
CN110648670A (en)* | 2019-10-22 | 2020-01-03 | 中信银行股份有限公司 | Fraud identification method and device, electronic equipment and computer-readable storage medium
CN110970036A (en)* | 2019-12-24 | 2020-04-07 | 网易(杭州)网络有限公司 | Voiceprint recognition method and device, computer storage medium and electronic equipment
CN112687278A (en)* | 2020-12-03 | 2021-04-20 | 科大讯飞股份有限公司 | Identity verification method, electronic equipment and storage device
CN112687278B (en)* | 2020-12-03 | 2022-09-06 | 科大讯飞股份有限公司 | Identity verification method, electronic equipment and storage device

Also Published As

Publication number | Publication date
CA3031819C (en) | 2021-07-13
CA3031819A1 (en) | 2019-04-05
CA3115632C (en) | 2022-08-30
CA3115632A1 (en) | 2019-04-05
US20190356776A1 (en) | 2019-11-21
US10091352B1 (en) | 2018-10-02
US10412214B2 (en) | 2019-09-10
EP3525209A1 (en) | 2019-08-14
US10574812B2 (en) | 2020-02-25
EP3525209B1 (en) | 2020-11-25
US20190245967A1 (en) | 2019-08-08
US10205823B1 (en) | 2019-02-12

Similar Documents

Publication | Title
US10574812B2 (en) | Systems and methods for cluster-based voice verification
JP7210634B2 (en) | Voice query detection and suppression
US12015637B2 (en) | Systems and methods for end-to-end architectures for voice spoofing detection
US11983259B2 (en) | Authentication via a dynamic passphrase
US9830440B2 (en) | Biometric verification using predicted signatures
WO2018166187A1 (en) | Server, identity verification method and system, and a computer-readable storage medium
US20230401338A1 (en) | Method for detecting an audio adversarial attack with respect to a voice input processed by an automatic speech recognition system, corresponding device, computer program product and computer-readable carrier medium
WO2019227583A1 (en) | Voiceprint recognition method and device, terminal device and storage medium
US9646613B2 (en) | Methods and systems for splitting a digital signal
US20120143608A1 (en) | Audio signal source verification system
US11841932B2 (en) | System and method for updating biometric evaluation systems
US20230239290A1 (en) | Systems and methods for coherent and tiered voice enrollment
US20250046317A1 (en) | Methods and systems for authenticating users
EP4506838A1 (en) | Methods and systems for authenticating users
CN115116447A (en) | A method, device and electronic device for acquiring audio registration information
CN117334201A (en) | Voice recognition method, device, equipment and medium
CN115034904A (en) | Transaction admission auditing method, apparatus, device, medium and program product
GB2637501A (en) | An authentication system and method

Legal Events

Date | Code | Title | Description
FEPP | Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF | Information on status: patent grant

Free format text: PATENTED CASE

MAFP | Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

