CN109767765A - Vocabulary matching method and device, storage medium, and computer equipment - Google Patents

Vocabulary matching method and device, storage medium, and computer equipment

Info

Publication number
CN109767765A
CN109767765A (application number CN201910045130.XA)
Authority
CN
China
Prior art keywords
mood
voice signal
text
emotion
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910045130.XA
Other languages
Chinese (zh)
Inventor
孙强 (Sun Qiang)
商文彬 (Shang Wenbin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910045130.XA
Publication of CN109767765A
Status: Pending

Abstract

The present application discloses a script matching method and apparatus, a storage medium, and a computer device, relating to the field of information processing. The method includes: acquiring a customer's voice signal and performing speech recognition on it based on a neural network model to obtain the corresponding text; extracting keywords from the text and determining the customer's first emotion according to the extracted keywords; performing emotion recognition on the voice signal based on a neural network model to obtain the corresponding second emotion; determining the emotion corresponding to the voice signal according to the first emotion, the second emotion, and a preset rule; and looking up, in a pre-established reply database, an answer script corresponding to the text and/or a soothing script corresponding to the emotion, the reply database storing correspondences between texts and answer scripts and/or between emotions and soothing scripts. The application avoids muddled communication caused by inexperienced customer-service staff and improves service quality.

Description

Script matching method and apparatus, storage medium, and computer device
Technical field
The present application relates to the field of information processing, and in particular to a script matching method and apparatus, a storage medium, and a computer device.
Background technique
With the development of networks, dialogue between customers and enterprises has evolved from face-to-face consultation to exchanges over the network, the telephone, and similar channels. Telephone-based customer service centers have become an important channel through which enterprises interact with users.
Customer service is a labor-intensive field. As businesses of all kinds develop, the workload of customer-service staff increases, and staff are expected to improve their own skills in order to improve service quality.
However, high-quality customer service not only requires staff to be familiar with business processes and norms, but also demands certain psychological skills. Staff who lack experience may communicate poorly, in serious cases escalating complaints and harming service quality.
Summary of the invention
In view of this, the present application provides a script matching method and apparatus, a storage medium, and a computer device, whose main purpose is to solve the problem of how to improve customer-service quality.
According to one aspect of the present application, a script matching method is provided, the method comprising:
acquiring a customer's voice signal, and performing speech recognition on the voice signal based on a neural network model to obtain the text corresponding to the voice signal;
extracting keywords from the text, and determining the customer's first emotion according to the extracted keywords;
performing emotion recognition on the voice signal based on a neural network model to obtain the second emotion corresponding to the voice signal;
determining the emotion corresponding to the voice signal according to the first emotion, the second emotion, and a preset rule;
looking up, in a pre-established reply database, an answer script corresponding to the text and/or a soothing script corresponding to the emotion, the reply database storing correspondences between texts and answer scripts and/or between emotions and soothing scripts.
Optionally, performing speech recognition on the voice signal based on a neural network model to obtain the corresponding text comprises:
constructing an acoustic model, wherein the acoustic model includes a phoneme training model and a hybrid neural network model based on memory-cell connections;
extracting acoustic features of the voice signal and inputting them into the acoustic model;
performing phoneme recognition on the acoustic features with the trained phoneme training model to obtain a phoneme recognition result;
performing text recognition with the trained hybrid neural network model based on memory-cell connections to obtain the text corresponding to the voice signal.
Optionally, extracting the acoustic features of the voice signal comprises:
applying a Fourier transform to the voice signal to convert the time-domain signal into a frequency-domain energy spectrum;
feeding the energy spectrum into a triangular filter bank and obtaining the logarithmic energy output by the filter bank;
applying a discrete cosine transform to the logarithmic energy to obtain the acoustic features of the voice signal.
Optionally, the second-emotion acquiring unit is further configured to:
acquire multiple training audio clips, extract a first acoustic feature vector and a first sample-entropy feature from each training clip, and fuse each clip's first acoustic feature vector with its first sample-entropy feature to obtain the clip's first emotion-spectrogram feature vector;
apply dimensionality reduction to the first emotion-spectrogram feature vectors to obtain second emotion-spectrogram feature vectors;
feed the second emotion-spectrogram feature vectors of the training clips for each emotion into a neural network model for training, obtaining a spectrogram-vector emotion model for each emotion and adding it to a trained template library;
extract a second acoustic feature vector and a second sample-entropy feature from the voice signal, fuse them to obtain the signal's third emotion-spectrogram feature vector, compare that vector with each spectrogram-vector emotion model in the trained template library to compute the matching degrees, and output the second emotion corresponding to the highest matching degree.
Optionally, applying dimensionality reduction to the first emotion-spectrogram feature vector to obtain the second emotion-spectrogram feature vector comprises:
applying principal component analysis (PCA) to the first emotion-spectrogram feature vector to obtain the second emotion-spectrogram feature vector.
Optionally, the neural network model is a back-propagation (BP) neural network model.
Optionally, the method further comprises:
performing emotion recognition on the voice signals of the entire customer-agent dialogue and generating an emotion curve;
determining service satisfaction according to the emotion curve.
According to another aspect of the present application, a script matching apparatus is provided, the apparatus comprising:
a speech recognition unit for acquiring the customer's voice signal and performing speech recognition on it based on a neural network model to obtain the corresponding text;
a first emotion determination unit for extracting keywords from the text and determining the customer's first emotion according to the extracted keywords;
a second emotion acquiring unit for performing emotion recognition on the voice signal based on a neural network model to obtain the corresponding second emotion;
an emotion determination unit for determining the emotion corresponding to the voice signal according to the first emotion, the second emotion, and a preset rule;
a script matching unit for looking up, in a pre-established reply database, an answer script corresponding to the text and/or a soothing script corresponding to the emotion, the reply database storing correspondences between texts and answer scripts and/or between emotions and soothing scripts.
Optionally, the speech recognition unit is further configured to:
construct an acoustic model, wherein the acoustic model includes a phoneme training model and a hybrid neural network model based on memory-cell connections;
extract acoustic features of the voice signal and input them into the acoustic model;
perform phoneme recognition on the acoustic features with the trained phoneme training model to obtain a phoneme recognition result;
perform text recognition with the trained hybrid neural network model based on memory-cell connections to obtain the text corresponding to the voice signal.
Optionally, extracting the acoustic features of the voice signal comprises:
applying a Fourier transform to the voice signal to convert the time-domain signal into a frequency-domain energy spectrum;
feeding the energy spectrum into a triangular filter bank and obtaining the logarithmic energy output by the filter bank;
applying a discrete cosine transform to the logarithmic energy to obtain the acoustic features of the voice signal.
Optionally, the second-emotion acquiring unit is further configured to:
acquire multiple training audio clips, extract a first acoustic feature vector and a first sample-entropy feature from each training clip, and fuse each clip's first acoustic feature vector with its first sample-entropy feature to obtain the clip's first emotion-spectrogram feature vector;
apply dimensionality reduction to the first emotion-spectrogram feature vectors to obtain second emotion-spectrogram feature vectors;
feed the second emotion-spectrogram feature vectors of the training clips for each emotion into a neural network model for training, obtaining a spectrogram-vector emotion model for each emotion and adding it to a trained template library;
extract a second acoustic feature vector and a second sample-entropy feature from the voice signal, fuse them to obtain the signal's third emotion-spectrogram feature vector, compare that vector with each spectrogram-vector emotion model in the trained template library to compute the matching degrees, and output the second emotion corresponding to the highest matching degree.
Optionally, applying dimensionality reduction to the first emotion-spectrogram feature vector to obtain the second emotion-spectrogram feature vector comprises:
applying principal component analysis (PCA) to the first emotion-spectrogram feature vector to obtain the second emotion-spectrogram feature vector.
Optionally, the neural network model is a back-propagation (BP) neural network model.
Optionally, the apparatus further comprises:
an emotion-curve generation unit for performing emotion recognition on the voice signals of the entire customer-agent dialogue and generating an emotion curve;
a service-satisfaction determination unit for determining service satisfaction according to the emotion curve.
According to another aspect of the present application, a storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the above script matching method is implemented.
According to yet another aspect of the present application, a computer device is provided, comprising a storage medium, a processor, and a computer program stored on the storage medium and runnable on the processor; when the processor executes the program, it implements the above script matching method.
Through the above technical solution, the method and apparatus, storage medium, and computer device provided by the present application perform speech recognition and/or emotion recognition on the customer's voice signal and supply customer-service staff with answer scripts and/or soothing scripts, avoiding muddled communication caused by inexperience and improving service quality. In addition, the application performs emotion recognition on the voice signals of the entire customer-agent dialogue, generates an emotion curve, and determines service satisfaction from the curve, further improving customer-service quality.
The above is only an overview of the technical solution of the present application. In order that the technical means of the application may be better understood and implemented in accordance with the contents of the specification, and to make the above and other objects, features, and advantages of the application clearer, specific embodiments of the application are set forth below.
Brief description of the drawings
The drawings described here provide a further understanding of the present application and constitute a part of it; the illustrative embodiments of the application and their descriptions serve to explain the application and do not unduly limit it. In the drawings:
Fig. 1 shows a flow diagram of a script matching method provided by an embodiment of the present application;
Fig. 2 shows a flow diagram of another script matching method provided by an embodiment of the present application;
Fig. 3 shows a structural diagram of a script matching apparatus provided by an embodiment of the present application.
Detailed description
The present application is described in detail below with reference to the drawings and in conjunction with the embodiments. Note that, where no conflict arises, the embodiments of the application and the features in the embodiments may be combined with each other.
Addressing the problem that customer-service staff may, for lack of experience, communicate poorly and harm service quality, an embodiment of the present application provides a script matching method. As shown in Fig. 1, the method comprises:
S11: acquire the customer's voice signal, and perform speech recognition on the voice signal based on a neural network model to obtain the corresponding text.
Note that speech recognition is the technique of having a computer convert a voice signal into text through a recognition process; its purpose here is to identify what the customer said.
S12: extract keywords from the text, and determine the customer's first emotion according to the extracted keywords.
It will be appreciated that a customer's emotion shows through specific keywords; by extracting keywords from the customer's voice signal, this embodiment can make a preliminary determination of the first emotion. For example, when the extracted keywords include phrases such as "too slow" or "too poor", the customer's first emotion can be determined to be anger.
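The keyword-based determination of the first emotion in step S12 can be sketched as follows. The lexicon entries and the neutral fallback are illustrative assumptions; the patent gives only "too slow" / "too poor" implying anger as an example.

```python
# Hypothetical keyword lexicon mapping complaint phrases to a preliminary
# ("first") emotion. Entries beyond the patent's example are invented.
FIRST_EMOTION_LEXICON = {
    "too slow": "anger",
    "too poor": "anger",
    "thank you": "pleased",
}

def first_emotion(text: str) -> str:
    """Return the first emotion implied by any lexicon keyword in the text."""
    for keyword, emotion in FIRST_EMOTION_LEXICON.items():
        if keyword in text:
            return emotion
    return "neutral"  # assumed fallback when no keyword matches
```

In a full system the lexicon would be built from segmented historical dialogues rather than hard-coded.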
S13: perform emotion recognition on the voice signal based on a neural network model to obtain the corresponding second emotion.
It will be appreciated that sound is an important carrier of information and an important channel of human communication; a person's voice carries not only linguistic content but also emotional information. Emotion recognition processes the emotional information in the voice signal: changes in a person's emotion are reflected in the acoustic features extracted from the voice.
S14: determine the emotion corresponding to the voice signal according to the first emotion, the second emotion, and a preset rule.
It will be appreciated that this embodiment determines the first and second emotions separately, from the extracted keywords and from the emotional information carried by the voice signal itself, and then determines the signal's emotion according to a preset rule (for example, by assigning different weights to the first and second emotions), improving the accuracy of the determination.
For example, this embodiment scores the emotion on a scale of 1-100, with different score ranges corresponding to different emotions: 1-10 indicates very pleased, 11-30 pleased, 31-50 dissatisfied, 51-60 very dissatisfied, 61-80 angry, and 81-100 very angry.
When determining the first emotion, different keywords correspond to different scores, and different scores characterize different emotions. When the extracted keywords include phrases such as "too slow" or "too poor", the emotion score is 70, so the customer's first emotion is determined to be anger. Similarly, suppose the second emotion determined by neural-network emotion recognition on the voice signal is anger with a score of 65. The first and second emotions are then assigned different weights via principal component analysis, for example 0.6 for the first emotion and 0.4 for the second; the final emotion score is 70 × 0.6 + 65 × 0.4 = 68, and the corresponding emotion is anger.
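The worked example, score bands plus a weighted combination of the two emotion scores, can be sketched as follows. The 0.6/0.4 weights follow the example above; the patent derives them via principal component analysis.

```python
# Score bands from the worked example: (upper bound, emotion label).
EMOTION_BANDS = [(10, "very pleased"), (30, "pleased"), (50, "dissatisfied"),
                 (60, "very dissatisfied"), (80, "angry"), (100, "very angry")]

def label(score):
    """Map a 1-100 emotion score to its band label."""
    for upper, name in EMOTION_BANDS:
        if score <= upper:
            return name
    return "unknown"

def fuse_emotions(text_score, voice_score, w_text=0.6, w_voice=0.4):
    # Weighted combination of the keyword-based (first) and acoustic
    # (second) emotion scores per the preset rule.
    return text_score * w_text + voice_score * w_voice
```

With the example values, `fuse_emotions(70, 65)` gives 68, which falls in the 61-80 band, i.e. anger.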
S15: look up, in the pre-established reply database, the answer script corresponding to the text and/or the soothing script corresponding to the emotion.
Note that the reply database stores correspondences between texts and answer scripts and/or between emotions and soothing scripts. The text is segmented into words to obtain keywords, and the answer script corresponding to the keywords is looked up in the pre-established reply database. The customer's current emotion is matched against the emotion-to-soothing-script correspondences stored in the database.
It will be appreciated that telephone customer service must handle customer complaints; the voice system can automatically detect such emotions, remind the agent to mind their attitude, and promptly supply an appropriate soothing script. By performing speech recognition and/or emotion recognition on the customer's voice signal and supplying the agent with answer and/or soothing scripts, the script matching method of this embodiment avoids muddled communication caused by inexperience and improves service quality.
In practical applications, features are extracted from the voice signal and the voice information is converted into the corresponding text through a weighted finite-state transducer (WFST) network to complete the speech recognition process. However, the recognition accuracy of this approach is relatively low. To improve it, another embodiment of the present application provides a script matching method similar to the one in Fig. 1, in which performing speech recognition on the voice signal based on a neural network model to obtain the corresponding text comprises:
constructing an acoustic model, wherein the acoustic model includes a phoneme training model and a hybrid neural network model based on memory-cell connections;
extracting acoustic features of the voice signal and inputting them into the acoustic model;
performing phoneme recognition on the acoustic features with the trained phoneme training model to obtain a phoneme recognition result;
performing text recognition with the trained hybrid neural network model based on memory-cell connections to obtain the text corresponding to the voice signal.
It will be appreciated that different acoustic features in a person's voice characterize different emotions; by recognizing the acoustic features of the voice signal, a trained hybrid neural network model can perform emotion recognition on the signal.
Preferably, before extracting the acoustic features of the voice signal, the method further includes pre-processing the voice signal, where pre-processing specifically includes sampling and/or pre-emphasis and/or pre-filtering and/or windowing and/or endpoint detection.
The trained hybrid neural network model based on memory-cell connections outputs the text information corresponding to the voice signal according to the recognition result it receives. By first pre-processing the raw voice signal, then extracting acoustic features and performing speech recognition with the acoustic model, the accuracy of speech recognition is improved.
Optionally, extracting the acoustic features of the voice signal comprises:
applying a Fourier transform to the voice signal to convert the time-domain signal into a frequency-domain energy spectrum;
feeding the energy spectrum into a triangular filter bank and obtaining the logarithmic energy output by the filter bank;
applying a discrete cosine transform to the logarithmic energy to obtain the acoustic features of the voice signal.
It will be appreciated that a Fourier transform is applied to the voice signal to convert the time-domain signal into a frequency-domain energy spectrum; the energy spectrum is passed through a bank of mel-scale triangular filters, which emphasizes the formant features of speech; the logarithmic energy of each filter's output is computed; and applying a discrete cosine transform to the filter-bank log energies yields the mel-frequency cepstral coefficients (MFCCs), i.e. the MFCC acoustic features.
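The MFCC pipeline above can be sketched for a single pre-processed frame as follows. The 8 kHz sample rate and the deliberately small filter and coefficient counts are illustrative assumptions; real front ends typically use around 26 filters and 13 coefficients.

```python
import numpy as np

def mfcc(frame, sample_rate=8000, n_filters=8, n_coeffs=4):
    """Toy MFCC for one pre-emphasized, windowed frame: Fourier transform ->
    energy spectrum -> mel-scale triangular filter bank -> log energy -> DCT."""
    n_fft = len(frame)
    # 1. Fourier transform: time-domain frame -> frequency-domain energy spectrum
    energy = np.abs(np.fft.rfft(frame)) ** 2
    # 2. Triangular filter bank spaced evenly on the mel scale
    hz_to_mel = lambda hz: 2595.0 * np.log10(1.0 + hz / 700.0)
    mel_to_hz = lambda mel: 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
    mel_points = np.linspace(0.0, hz_to_mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, energy.size))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    # 3. Logarithmic energy of each filter's output
    log_energy = np.log(fbank @ energy + 1e-10)
    # 4. Discrete cosine transform of the log energies -> MFCC features
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), n + 0.5) / n_filters)
    return basis @ log_energy
```

In practice the frame would come from the pre-processing chain (pre-emphasis, windowing, endpoint detection) described above.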
Specifically, performing emotion recognition on the voice signal based on a neural network model to obtain the corresponding emotion comprises:
acquiring multiple training audio clips, extracting a first acoustic feature vector and a first sample-entropy feature from each training clip, and fusing each clip's first acoustic feature vector with its first sample-entropy feature to obtain the clip's first emotion-spectrogram feature vector;
applying dimensionality reduction to the first emotion-spectrogram feature vectors to obtain second emotion-spectrogram feature vectors;
feeding the second emotion-spectrogram feature vectors of the training clips for each emotion into a neural network model for training, obtaining a spectrogram-vector emotion model for each emotion and adding it to a trained template library;
extracting a second acoustic feature vector and a second sample-entropy feature from the voice signal, fusing them to obtain the signal's third emotion-spectrogram feature vector, comparing that vector with each spectrogram-vector emotion model in the trained template library to compute the matching degrees, and outputting the emotion corresponding to the highest matching degree.
Note that sample entropy is a relatively recent measure of time-series complexity, defined in terms of the conditional probability that a data vector that maintains similarity at dimension m continues to do so when the dimension grows to m+1. Fusing each training clip's first acoustic feature vector with its first sample-entropy feature extracts the emotional content of the acoustic features along multiple dimensions: the larger the sample entropy, the higher the probability of new information being generated and the more complex the sequence, so emotion categories can be distinguished by how dynamically the voice signal changes under different emotions.
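Sample entropy itself can be sketched with the standard formulation: the negative log of the ratio of (m+1)-point to m-point template matches. The embedding dimension m and tolerance r are illustrative defaults, not values from the patent.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy of a sequence: -ln(A/B), where B counts pairs of
    m-point templates within tolerance r (Chebyshev distance) and A counts
    the pairs that remain within tolerance at m+1 points."""
    def count_matches(dim):
        templates = [series[i:i + dim] for i in range(len(series) - dim + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # no matches: maximally irregular by convention
    return -math.log(a / b)
```

A highly regular sequence such as an alternating pattern yields a small value, consistent with the text: larger values signal more complex, more information-rich dynamics.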
Preferably, applying dimensionality reduction to the first emotion-spectrogram feature vector to obtain the second emotion-spectrogram feature vector comprises:
applying principal component analysis (PCA) to the first emotion-spectrogram feature vector to obtain the second emotion-spectrogram feature vector.
Optionally, the neural network model is a back-propagation (BP) neural network model.
Preferably, feeding the second emotion-spectrogram feature vectors of the training clips for each emotion into the neural network model for training specifically includes:
initializing the network parameters of the back-propagation (BP) neural network model, wherein the network parameters include connection weights, connection thresholds, the maximum number of training iterations, and the error precision;
feeding the second emotion-spectrogram feature vectors of the training clips for each emotion into the BP neural network model for training.
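A minimal BP training loop under assumed hyperparameters (one hidden layer, sigmoid activations, squared-error loss) might look as follows; the patent specifies only that connection weights, thresholds, a maximum iteration count, and an error precision are initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, hidden=8, lr=0.5, max_iter=2000, eps=1e-3):
    """Train a one-hidden-layer BP network by gradient descent.
    Layer sizes, learning rate, and loss are illustrative choices."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    loss = float("inf")
    for _ in range(max_iter):
        h = sigmoid(X @ W1 + b1)          # forward pass
        out = sigmoid(h @ W2 + b2)
        err = out - y
        loss = 0.5 * float(np.mean(err ** 2))
        if loss < eps:                    # error-precision stopping criterion
            break
        # backward pass: propagate the error through the sigmoid layers
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X); b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2, loss
```

One such network would be trained per emotion class on that class's reduced emotion-spectrogram feature vectors and stored in the template library.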
Note that a back-propagation (BP) neural network is a multilayer feed-forward network trained by the error back-propagation algorithm. It can realize arbitrarily complex nonlinear mappings, learns on its own, and has a certain capacity for generalization and abstraction, making it suitable for pattern recognition.
Note also that because a BP neural network has many neuron nodes, an excessive input dimension makes the computation at the output nodes expensive, complicating the construction of the network and reducing training efficiency. The first emotion-spectrogram feature vector therefore needs dimensionality reduction. PCA (principal component analysis) is a multivariate statistical method that describes samples with a smaller number of features so as to reduce the dimensionality of the feature space; this embodiment uses PCA to reduce the dimensionality of the first emotion-spectrogram feature vector.
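The PCA reduction step can be sketched with the standard covariance eigendecomposition; the sample and target dimensions here are illustrative.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples, n_dims) onto the top-k principal
    components, shrinking the BP network's input dimension to k."""
    Xc = X - X.mean(axis=0)                  # center the samples
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # keep the k largest
    return Xc @ eigvecs[:, order]
```

The reduced vectors are what the text calls the second emotion-spectrogram feature vectors, and they become the BP network's training inputs.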
By fusing each training clip's first acoustic feature vector with its first sample-entropy feature, this embodiment obtains the clip's emotion-spectrogram feature vector. The larger the sample entropy, the higher the probability of new information and the more complex the sequence, so emotion categories can be distinguished by how dynamically the voice signal changes under different emotions, preserving classification performance and improving the accuracy of emotion classification. Meanwhile, a BP neural network is a multilayer feed-forward network trained by error back-propagation that can realize arbitrarily complex nonlinear mappings, learns on its own, and has a certain capacity for generalization and abstraction; before the emotion-spectrogram feature vectors are modeled with the BP network, PCA is applied to reduce their dimensionality, which shrinks the input layer, reduces the computation at the output nodes, simplifies the construction of the BP network, and improves training efficiency. Moreover, comprehensive matching against the spectrogram vectors of multiple classes enables accurate recognition of a variety of emotional characteristics, improving the flexibility, convenience, rigor, and efficiency of emotion recognition. It can better adapt to the future needs of intelligent hardware and be fully and rapidly configured for increasingly complex intelligent hardware, solving the current technical problems of complex emotion-recognition processing, high implementation difficulty, low accuracy, and low efficiency.
Fig. 2 shows a flow diagram of another script matching method provided by an embodiment of the present application. As shown in Fig. 2, the method comprises:
S21: acquire the customer's voice signal, and perform speech recognition on the voice signal based on a neural network model to obtain the corresponding text;
S22: extract keywords from the text, and determine the customer's first emotion according to the extracted keywords;
S23: perform emotion recognition on the voice signal based on a neural network model to obtain the corresponding second emotion;
S24: determine the emotion corresponding to the voice signal according to the first emotion, the second emotion, and a preset rule;
S25: look up, in the pre-established reply database, the answer script corresponding to the text and/or the soothing script corresponding to the emotion;
S26: perform emotion recognition on the voice signals of the entire customer-agent dialogue and generate an emotion curve;
It will be appreciated that this embodiment tracks and describes the customer's emotion and generates an emotion curve, i.e. a curve characterizing how the customer's emotion changes over the course of the dialogue.
S27: determine service satisfaction according to the emotion curve.
Note that this embodiment computes service satisfaction from the emotion curve according to a preset rule, thereby evaluating the quality of the agent's service.
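One way to sketch such a preset rule is below. The rule itself, penalizing average negativity and rewarding a calming trend, is an assumption for illustration; the patent leaves the rule unspecified.

```python
def satisfaction(emotion_curve):
    """Toy satisfaction score from an emotion curve: one 1-100 emotion score
    per utterance, where higher means more negative under the score bands.
    The averaging-plus-trend rule is a hypothetical preset rule."""
    avg = sum(emotion_curve) / len(emotion_curve)
    trend = emotion_curve[-1] - emotion_curve[0]   # negative if the customer calmed down
    score = 100.0 - avg - 0.2 * trend
    return max(0.0, min(100.0, score))
```

A call where anger eases over time, e.g. scores 75, 60, 40, 20, rates higher than the reverse trajectory, matching the intuition that soothing the customer should be rewarded.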
The specific processes of steps S21-S25 are similar to those in Fig. 1 and are not repeated here.
In traditional customer-service systems, customers are asked to rate the service when it ends, yet many are unwilling to do so, leaving evaluation data missing. This embodiment performs emotion recognition on the voice signals of the entire customer-agent dialogue, generates an emotion curve, and determines service satisfaction from the curve. The satisfaction is then fed back to the agent, improving service quality and serving a supervisory role.
Fig. 3 shows a schematic structural diagram of a script matching apparatus provided by an embodiment of the present application. As shown in Fig. 3, the apparatus of this embodiment includes:
a speech recognition unit 31, configured to obtain a client's voice signal, perform speech recognition on the voice signal based on a neural network model, and obtain the text corresponding to the voice signal;
a first mood determination unit 32, configured to extract keywords from the text and determine the client's first mood according to the extracted keywords;
a second mood acquisition unit 33, configured to perform emotion recognition on the voice signal based on a neural network model and obtain a second mood corresponding to the voice signal;
a mood determination unit 34, configured to determine the mood corresponding to the voice signal according to the first mood, the second mood, and a preset rule;
a script matching unit 35, configured to look up, in a pre-established reply database, a response script corresponding to the text and/or a soothing script corresponding to the mood, the reply database storing correspondences between texts and response scripts and/or between moods and soothing scripts.
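The lookup behaviour of unit 35 can be illustrated with a minimal in-memory reply database. The table contents, the keyword-containment matching rule, and the function names are assumptions for demonstration only; a production system would presumably use a real database and more robust text matching.

```python
# Illustrative sketch of the pre-built reply database used by unit 35.
# Table contents and the matching rule are invented for demonstration.

ANSWER_SCRIPTS = {   # text keyword -> response script
    "refund": "Your refund will be processed within 3 business days.",
    "password": "You can reset your password from the account page.",
}
SOOTHING_SCRIPTS = {  # mood -> soothing script
    "angry": "I am sorry for the trouble; let me resolve this right away.",
    "sad": "I understand how you feel; we will sort this out together.",
}

def match_scripts(text, mood):
    """Return (response script or None, soothing script or None)."""
    answer = next((s for kw, s in ANSWER_SCRIPTS.items() if kw in text), None)
    soothing = SOOTHING_SCRIPTS.get(mood)
    return answer, soothing

answer, soothing = match_scripts("i want a refund now", "angry")
```

Because the two lookups are independent, the unit can return a response script, a soothing script, both, or neither, matching the "and/or" wording above.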
By performing speech recognition and/or emotion recognition on the client's voice signal, the script matching apparatus of this embodiment provides the customer-service staff with response scripts and/or soothing scripts, avoiding the confused communication that inexperience can cause and improving service quality.
The speech recognition unit 31 is further configured to:
construct an acoustic model, where the acoustic model includes a phoneme training model and a hybrid neural network model based on memory-unit connections;
extract the acoustic features of the voice signal and input the acoustic features into the acoustic model;
perform phoneme recognition on the acoustic features with the trained phoneme training model to obtain a phoneme recognition result;
perform text recognition with the trained hybrid neural network model based on memory-unit connections to obtain the text corresponding to the voice signal.
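The two-stage decoding above (phoneme recognition, then text recognition) can be caricatured as follows. Nearest-template matching stands in for the trained phoneme model, and a pronunciation lexicon stands in for the memory-unit hybrid network; both are toy stand-ins invented for illustration, not the patent's models.

```python
# Toy two-stage decoder: feature frames -> phonemes -> text.
# Templates and lexicon are invented stand-ins for the trained models.

PHONEME_TEMPLATES = {"h": [0.9, 0.1], "i": [0.1, 0.9]}  # phoneme -> mean feature
LEXICON = {("h", "i"): "hi"}                            # phoneme sequence -> word

def nearest_phoneme(frame):
    """Stage 1: classify one feature frame by nearest template."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PHONEME_TEMPLATES, key=lambda p: dist(frame, PHONEME_TEMPLATES[p]))

def decode(frames):
    """Stage 2: map the phoneme sequence to text via the lexicon."""
    phonemes = tuple(nearest_phoneme(f) for f in frames)
    return phonemes, LEXICON.get(phonemes, "")

phonemes, text = decode([[0.85, 0.2], [0.05, 0.95]])
```

The separation of concerns is the point: the first stage only needs acoustics, while the second stage only needs phoneme sequences, which is why each can be trained on its own data.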
Optionally, extracting the acoustic features of the voice signal includes:
performing a Fourier transform on the voice signal to convert the time-domain voice signal into a frequency-domain energy spectrum;
inputting the energy spectrum into a triangular filter bank and obtaining the logarithmic energies output by the triangular filter bank;
performing a discrete cosine transform on the logarithmic energies to obtain the acoustic features of the voice signal.
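The three steps above describe an MFCC-style front end. The sketch below follows them literally (DFT power spectrum, triangular filter bank, log energies, DCT), with the simplifying assumptions that the filters are spaced linearly rather than on the mel scale and that the frame and filter-bank sizes are toy values.

```python
# MFCC-style front end: Fourier transform -> triangular filters -> log -> DCT.
# Linear (not mel) filter spacing and toy sizes; illustrative only.
import cmath
import math

def power_spectrum(frame):
    """Naive DFT power spectrum of one frame (bins 0..n/2)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(frame))) ** 2 / n
            for k in range(n // 2 + 1)]

def triangular_filterbank(n_filters, n_bins):
    """Linearly spaced triangular filters over the spectrum bins."""
    pts = [round(i * (n_bins - 1) / (n_filters + 1)) for i in range(n_filters + 2)]
    bank = []
    for i in range(1, n_filters + 1):
        left, center, right = pts[i - 1], pts[i], pts[i + 1]
        filt = [0.0] * n_bins
        for k in range(left, center + 1):
            if center > left:
                filt[k] = (k - left) / (center - left)   # rising edge
        for k in range(center, right + 1):
            if right > center:
                filt[k] = (right - k) / (right - center)  # falling edge
        bank.append(filt)
    return bank

def log_energies(spectrum, bank):
    return [math.log(max(sum(f * s for f, s in zip(filt, spectrum)), 1e-10))
            for filt in bank]

def dct(energies, n_coeffs):
    """DCT-II of the log filter-bank energies (unscaled)."""
    n = len(energies)
    return [sum(e * math.cos(math.pi * c * (m + 0.5) / n)
                for m, e in enumerate(energies))
            for c in range(n_coeffs)]

def acoustic_features(frame, n_filters=6, n_coeffs=4):
    spec = power_spectrum(frame)
    bank = triangular_filterbank(n_filters, len(spec))
    return dct(log_energies(spec, bank), n_coeffs)

frame = [math.sin(2 * math.pi * 3 * t / 64) for t in range(64)]
feats = acoustic_features(frame)
```

A real front end would additionally window overlapping frames, use an FFT, and place the triangular filters on the mel scale; the data flow, however, is exactly the three steps the patent lists.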
The second mood acquisition unit 33 is further configured to:
obtain multiple training audio samples, extract a first acoustic feature vector and a first sample-entropy feature from each training audio sample, and fuse the first acoustic feature vector and the first sample-entropy feature of each sample to obtain a first emotion sound-spectrum feature vector for each sample;
perform dimension reduction on the first emotion sound-spectrum feature vectors to obtain second emotion sound-spectrum feature vectors;
for each mood, input the second emotion sound-spectrum feature vectors of the training audio samples corresponding to that mood into a neural network model for training, obtain a sound-spectrum-vector mood model for the mood, and add it to a trained template library;
extract a second acoustic feature vector and a second sample-entropy feature from the voice signal, fuse the second acoustic feature vector and the second sample-entropy feature to obtain a third emotion sound-spectrum feature vector for the voice signal, compare the third emotion sound-spectrum feature vector with each sound-spectrum-vector mood model in the trained template library to compute mood-model matching degrees, and output the mood corresponding to the highest matching degree.
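The matching step can be sketched as follows. Concatenation stands in for the unspecified fusion rule, a single template vector per emotion stands in for the trained "sound-spectrum-vector mood model", and cosine similarity stands in for the matching degree; all three are assumptions, and the template values are toy numbers.

```python
# Sketch of template matching: fuse features, score against one template
# per emotion, return the best-scoring emotion. All values are toy stand-ins.
import math

def fuse(acoustic_vec, sample_entropy):
    """Fusion by concatenation (an assumption; the patent does not fix it)."""
    return acoustic_vec + [sample_entropy]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

TEMPLATE_LIBRARY = {          # per-emotion template vectors (toy values)
    "angry":   [0.9, 0.8, 0.7, 1.4],
    "neutral": [0.2, 0.9, 0.1, 0.2],
}

def recognize_mood(acoustic_vec, sample_entropy):
    fused = fuse(acoustic_vec, sample_entropy)
    scores = {mood: cosine(fused, t) for mood, t in TEMPLATE_LIBRARY.items()}
    return max(scores, key=scores.get)   # emotion with highest matching degree

mood = recognize_mood([0.85, 0.75, 0.72], 1.3)
```

The template library plays the role of the trained template library above: adding a new emotion is just adding one more entry, with no change to the matching code.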
Optionally, performing dimension reduction on the first emotion sound-spectrum feature vectors to obtain the second emotion sound-spectrum feature vectors includes:
performing dimension reduction on the first emotion sound-spectrum feature vectors using the principal component analysis (PCA) algorithm to obtain the second emotion sound-spectrum feature vectors.
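A minimal from-scratch PCA illustrates the dimension-reduction step: center the data, then extract the dominant principal component by power iteration on the covariance matrix. A production system would use a library routine (e.g. scikit-learn's PCA); reducing to a single component and the iteration count are simplifications for this sketch.

```python
# Minimal PCA sketch: center, build covariance, power-iterate to the top
# principal component, and project onto it. Illustrative, 1-component only.
import math

def pca_first_component(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d                       # power iteration start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]       # converges to the top eigenvector
    return means, v

def project(rows, means, v):
    """Reduce each row to its coordinate along the first component."""
    return [sum((r[j] - means[j]) * v[j] for j in range(len(v))) for r in rows]

data = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
means, v = pca_first_component(data)
reduced = project(data, means, v)       # 2-D points reduced to 1-D scores
```

For the strongly correlated toy data above, the first component points along the diagonal, so a single coordinate preserves most of the variance, which is exactly why PCA is useful before training the mood models.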
Optionally, the neural network model is a back-propagation (BP) neural network model.
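A BP network in its simplest form is a feed-forward network trained by gradient descent with back-propagated error signals. The sketch below uses one hidden layer, sigmoid activations, no bias terms, and a single training example; all of these sizes and values are illustrative, since the patent only states that a BP network is used.

```python
# Minimal back-propagation (BP) network: one hidden layer, sigmoid units,
# plain gradient descent on squared error. Sizes and data are illustrative.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class BPNetwork:
    def __init__(self, n_in, n_hidden, seed=0):
        rnd = random.Random(seed)       # fixed seed for reproducibility
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.w2 = [rnd.uniform(-1, 1) for _ in range(n_hidden)]

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        y = sigmoid(sum(w * hj for w, hj in zip(self.w2, h)))
        return h, y

    def train_step(self, x, target, lr=0.5):
        """One gradient step on E = 0.5 * (y - target)**2."""
        h, y = self.forward(x)
        delta_out = (y - target) * y * (1 - y)                # output error signal
        for j, hj in enumerate(h):
            delta_h = delta_out * self.w2[j] * hj * (1 - hj)  # backpropagated signal
            self.w2[j] -= lr * delta_out * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * delta_h * xi

net = BPNetwork(n_in=2, n_hidden=4)
x, target = [1.0, 0.0], 1.0
before = net.forward(x)[1]
for _ in range(200):
    net.train_step(x, target)
after = net.forward(x)[1]   # output moves toward the target after training
```

In the patent's setting, the inputs would be the reduced emotion sound-spectrum feature vectors and the targets the mood labels; a practical model would also use biases, multiple outputs, and mini-batch training.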
Optionally, the apparatus further includes:
an emotion curve generation unit, configured to perform emotion recognition on the client's voice signals over the whole customer-service dialogue and generate an emotion curve;
a service satisfaction determination unit, configured to determine service satisfaction according to the emotion curve.
It should be noted that further descriptions of the functional units of the script matching apparatus provided by the embodiments of the present application can be found in the corresponding descriptions of Fig. 1 and Fig. 2 and are not repeated here.
Based on the methods shown in Fig. 1 and Fig. 2, an embodiment of the present application further provides a storage medium on which a computer program is stored; when the program is executed by a processor, it implements the script matching method shown in Fig. 1 and Fig. 2.
Based on this understanding, the technical solution of the present application can be embodied as a software product, which can be stored in a non-volatile storage medium (a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute the script matching method described in each implementation scenario of the present application.
Based on the methods shown in Fig. 1 and Fig. 2 and the virtual apparatus embodiment shown in Fig. 3, and in order to achieve the above purposes, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, etc. The physical device includes a storage medium and a processor: the storage medium is used to store a computer program, and the processor is used to execute the computer program to implement the script matching method shown in Fig. 1 and Fig. 2.
Optionally, the computer device may further include a user interface, a network interface, a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a Wi-Fi module, and so on. The user interface may include a display screen and an input unit such as a keyboard, and may optionally include a USB interface, a card-reader interface, etc. The network interface may optionally include a standard wired interface and a wireless interface (such as a Bluetooth or Wi-Fi interface).
Those skilled in the art will understand that the computer device structure provided by this embodiment does not limit the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device and supports the operation of the message handling program and other software and/or programs. The network communication module is used to enable communication among the components inside the storage medium and communication with the other hardware and software in the physical device.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform, or by hardware. The technical solution of the present application performs speech recognition and/or emotion recognition on the client's voice signal and provides the customer-service staff with response scripts and/or soothing scripts, avoiding the confused communication that inexperience can cause and improving service quality. Moreover, the present application also performs emotion recognition on the client's voice signals over the whole customer-service dialogue, generates an emotion curve, and determines service satisfaction from the curve, which can further improve customer-service quality.
Those skilled in the art will understand that the accompanying drawings are only schematic diagrams of preferred implementation scenarios, and that the modules or processes in the drawings are not necessarily required to implement the present application. The modules of an apparatus in an implementation scenario may be distributed among the devices of that scenario as described, or may be relocated, with corresponding changes, into one or more devices different from those of that scenario. The modules of the above implementation scenarios may be merged into one module or further split into multiple sub-modules.
The above serial numbers of the present application are for description only and do not indicate the relative merits of the implementation scenarios. The above discloses only several specific implementation scenarios of the present application; the present application is not limited thereto, and any variation that those skilled in the art can conceive shall fall within the protection scope of the present application.

Claims (10)

CN201910045130.XA | 2019-01-17 | 2019-01-17 | Vocabulary matching method and device, storage medium, and computer equipment | Pending | CN109767765A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910045130.XA (CN109767765A) | 2019-01-17 | 2019-01-17 | Vocabulary matching method and device, storage medium, and computer equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910045130.XA (CN109767765A) | 2019-01-17 | 2019-01-17 | Vocabulary matching method and device, storage medium, and computer equipment

Publications (1)

Publication Number | Publication Date
CN109767765A | 2019-05-17

Family

ID=66452482

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910045130.XA | Vocabulary matching method and device, storage medium, and computer equipment (CN109767765A, Pending) | 2019-01-17 | 2019-01-17

Country Status (1)

Country | Link
CN (1) | CN109767765A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103811009A (en)*2014-03-132014-05-21华东理工大学Smart phone customer service system based on speech analysis
CN108053840A (en)*2017-12-292018-05-18广州势必可赢网络科技有限公司Emotion recognition method and system based on PCA-BP
CN108305642A (en)*2017-06-302018-07-20腾讯科技(深圳)有限公司The determination method and apparatus of emotion information
CN108564940A (en)*2018-03-202018-09-21平安科技(深圳)有限公司Audio recognition method, server and computer readable storage medium
CN109033257A (en)*2018-07-062018-12-18中国平安人寿保险股份有限公司Talk about art recommended method, device, computer equipment and storage medium


Cited By (57)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110298682A (en)*2019-05-222019-10-01深圳壹账通智能科技有限公司Intelligent Decision-making Method, device, equipment and medium based on user information analysis
CN110265062A (en)*2019-06-132019-09-20上海指旺信息科技有限公司Collection method and device after intelligence based on mood detection is borrowed
CN111354377A (en)*2019-06-272020-06-30深圳市鸿合创新信息技术有限责任公司Method and device for recognizing emotion through voice and electronic equipment
CN110364183A (en)*2019-07-092019-10-22深圳壹账通智能科技有限公司Method, apparatus, computer equipment and the storage medium of voice quality inspection
CN110570879A (en)*2019-09-112019-12-13深圳壹账通智能科技有限公司Intelligent conversation method and device based on emotion recognition and computer equipment
CN110929005A (en)*2019-10-182020-03-27平安科技(深圳)有限公司 Task follow-up method, device, device and storage medium based on sentiment analysis
CN110910903A (en)*2019-12-042020-03-24深圳前海微众银行股份有限公司Speech emotion recognition method, device, equipment and computer readable storage medium
CN110910903B (en)*2019-12-042023-03-21深圳前海微众银行股份有限公司Speech emotion recognition method, device, equipment and computer readable storage medium
CN111143529A (en)*2019-12-242020-05-12北京赤金智娱科技有限公司 A method and device for dialogue with a dialogue robot
CN111128241A (en)*2019-12-302020-05-08上海浩琨信息科技有限公司Intelligent quality inspection method and system for voice call
CN111210843A (en)*2019-12-312020-05-29秒针信息技术有限公司Method and device for recommending dialect
CN111161733A (en)*2019-12-312020-05-15中国银行股份有限公司Control method and device for intelligent voice service
CN111178982B (en)*2020-01-022023-07-21珠海格力电器股份有限公司Customer satisfaction analysis method, storage medium and computer device
CN111178982A (en)*2020-01-022020-05-19珠海格力电器股份有限公司Customer satisfaction analysis method, storage medium and computer device
CN111833907A (en)*2020-01-082020-10-27北京嘀嘀无限科技发展有限公司Man-machine interaction method, terminal and computer readable storage medium
CN111694938B (en)*2020-04-272024-05-14平安科技(深圳)有限公司Emotion recognition-based reply method and device, computer equipment and storage medium
CN111694938A (en)*2020-04-272020-09-22平安科技(深圳)有限公司Emotion recognition-based answering method and device, computer equipment and storage medium
CN111739559B (en)*2020-05-072023-02-28北京捷通华声科技股份有限公司Speech early warning method, device, equipment and storage medium
CN111739559A (en)*2020-05-072020-10-02北京捷通华声科技股份有限公司Speech early warning method, device, equipment and storage medium
CN111598485A (en)*2020-05-282020-08-28成都晓多科技有限公司Multi-dimensional intelligent quality inspection method, device, terminal equipment and medium
CN111696558A (en)*2020-06-242020-09-22深圳壹账通智能科技有限公司Intelligent outbound method, device, computer equipment and storage medium
CN111832317A (en)*2020-07-092020-10-27平安普惠企业管理有限公司 Intelligent information diversion method, device, computer equipment and readable storage medium
CN111832317B (en)*2020-07-092023-08-18广州市炎华网络科技有限公司 Intelligent information diversion method, device, computer equipment and readable storage medium
CN111696556B (en)*2020-07-132023-05-16上海茂声智能科技有限公司 A method, system, device and storage medium for analyzing user dialogue sentiment
CN111696556A (en)*2020-07-132020-09-22上海茂声智能科技有限公司Method, system, equipment and storage medium for analyzing user conversation emotion
CN111858897A (en)*2020-07-302020-10-30北京首汽智行科技有限公司Customer service staff speech guiding method and system
CN112017668B (en)*2020-10-302021-09-24北京淇瑀信息科技有限公司Intelligent voice conversation method, device and system based on real-time emotion detection
CN112885348B (en)*2021-01-252024-03-08广州中汇信息科技有限公司AI-combined intelligent voice electric marketing method
CN112885348A (en)*2021-01-252021-06-01广州中汇信息科技有限公司AI-combined intelligent voice electric marketing method
CN112860868A (en)*2021-03-092021-05-28上海华客信息科技有限公司Customer service telephone analysis method, system, equipment and storage medium
CN112860873B (en)*2021-03-232024-03-05北京小米移动软件有限公司 Intelligent response method, device and storage medium
CN112860873A (en)*2021-03-232021-05-28北京小米移动软件有限公司Intelligent response method, device and storage medium
CN112860876A (en)*2021-03-312021-05-28中国工商银行股份有限公司Session auxiliary processing method and device
CN113220849A (en)*2021-04-062021-08-06青岛日日顺乐信云科技有限公司Customer service staff emotion dispersion scheme searching method, electronic equipment and storage medium
CN113094487A (en)*2021-05-062021-07-09中国银行股份有限公司Method and device for recommending dialect, electronic equipment and storage medium
CN113192498A (en)*2021-05-262021-07-30北京捷通华声科技股份有限公司Audio data processing method and device, processor and nonvolatile storage medium
CN113314150A (en)*2021-05-262021-08-27平安普惠企业管理有限公司Emotion recognition method and device based on voice data and storage medium
CN113422876B (en)*2021-06-242022-05-10广西电网有限责任公司 AI-based power customer service center auxiliary management method, system and medium
CN113435999A (en)*2021-06-242021-09-24中国工商银行股份有限公司Service processing method, device and system
CN113422876A (en)*2021-06-242021-09-21广西电网有限责任公司AI-based auxiliary management method, system and medium for power customer service center
CN113342960A (en)*2021-07-072021-09-03上海华客信息科技有限公司Client appeal processing method, system, device and storage medium
CN113990315A (en)*2021-10-222022-01-28南京联了么信息技术有限公司A intelligent audio amplifier for having suffer from cognitive disorder old person
CN113886531B (en)*2021-10-282024-08-02中国平安人寿保险股份有限公司Intelligent question-answer operation determining method, device, computer equipment and storage medium
CN113886531A (en)*2021-10-282022-01-04中国平安人寿保险股份有限公司Intelligent question and answer determining method and device, computer equipment and storage medium
CN114242109A (en)*2021-12-172022-03-25中国平安财产保险股份有限公司 Intelligent outbound call method, device, electronic device and medium based on emotion recognition
CN114299924A (en)*2021-12-242022-04-08北京声智科技有限公司Voice emotion-based conversational pushing method, device, equipment and storage medium
CN114299924B (en)*2021-12-242025-04-22北京声智科技有限公司 Method, device, equipment and storage medium for pushing speech based on voice emotion
CN114969265A (en)*2022-06-102022-08-30中国银行股份有限公司Customer service conversation matching method and device
CN115101096A (en)*2022-06-162022-09-23平安银行股份有限公司 Speech processing method, apparatus, device and storage medium based on contextual intent
CN115238867A (en)*2022-07-282022-10-25广东电力信息科技有限公司 A power fault location method based on intelligent identification of customer service unstructured data
CN116259307A (en)*2023-01-032023-06-13乐融致新电子科技(天津)有限公司 Speech emotion recognition method and device
CN116303956A (en)*2023-03-022023-06-23招商银行股份有限公司 Method, device, terminal equipment, and medium for webpage telephony assistant prompting
CN117992597A (en)*2024-04-032024-05-07江苏微皓智能科技有限公司Information feedback method, device, computer equipment and computer storage medium
CN117992597B (en)*2024-04-032024-06-07江苏微皓智能科技有限公司Information feedback method, device, computer equipment and computer storage medium
CN118779432A (en)*2024-07-242024-10-15国网新疆电力有限公司营销服务中心 A customer demand sentiment analysis method based on convolutional neural network
CN119211425A (en)*2024-11-282024-12-27福建博士通信息股份有限公司 Intelligent speech recognition method based on AI
CN119211425B (en)*2024-11-282025-01-28福建博士通信息股份有限公司 Intelligent speech recognition method based on AI

Similar Documents

Publication | Publication Date | Title
CN109767765A (en) Vocabulary matching method and device, storage medium, and computer equipment
CN112001628B (en)Recommendation method of intelligent interview video
CN107481720B (en)Explicit voiceprint recognition method and device
CN107818798A (en)Customer service quality evaluating method, device, equipment and storage medium
WO2020253128A1 (en)Voice recognition-based communication service method, apparatus, computer device, and storage medium
CN111081280A (en)Text-independent speech emotion recognition method and device and emotion recognition algorithm model generation method
CN114138960A (en)User intention identification method, device, equipment and medium
CN115222857A (en)Method, apparatus, electronic device and computer readable medium for generating avatar
US20250259628A1 (en)System method and apparatus for combining words and behaviors
CN113160819A (en)Method, apparatus, device, medium and product for outputting animation
Jia et al.A deep learning system for sentiment analysis of service calls
Shah et al.Speech emotion recognition based on SVM using MATLAB
US12347416B2 (en)Systems and methods to automate trust delivery
WO2024114303A1 (en)Phoneme recognition method and apparatus, electronic device and storage medium
Chang et al.Machine Learning and Consumer Data
CN114822597B (en)Speech emotion recognition method and device, processor and electronic equipment
Gomes et al.Person identification based on voice recognition
CN116469420A (en) Speech emotion recognition method, device, equipment and medium
Ramanarayanan et al.Using vision and speech features for automated prediction of performance metrics in multimodal dialogs
CN114333844A (en) Voiceprint recognition method, device, medium and equipment
CN116486789A (en)Speech recognition model generation method, speech recognition method, device and equipment
Peng et al.Toward predicting communication effectiveness
CN114203183B (en) User attribute identification method, device, electronic device and computer readable medium
Singh et al.Beyond Textual Analysis: Framework for CSAT Score Prediction with Speech and Text Emotion Features
CN114664305B (en) Information push method, device, storage medium and electronic device

Legal Events

Date | Code | Title | Description
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-05-17

