CN108766127A - Sign language interaction method, device, apparatus and storage medium - Google Patents

Sign language interaction method, device, apparatus and storage medium

Info

Publication number
CN108766127A
Authority
CN
China
Prior art keywords
sign language
reply
action
language action
semantic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810552444.4A
Other languages
Chinese (zh)
Inventor
邹祥祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN201810552444.4A
Publication of CN108766127A
Legal status: Pending

Abstract

Translated from Chinese

The present disclosure provides a sign language interaction method, device, apparatus, and storage medium. The method includes: receiving a first sign language action of a first user; generating a reply sign language action based on the first sign language action and at least one of a sign language communication library and input information from a second user; and outputting the reply sign language action, where the sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library. The method further includes: detecting a second sign language action made by the second user according to the reply sign language action; and, when the second sign language action is determined to be inconsistent with the reply sign language action, outputting the inconsistent reply sign language action.

Description

Translated from Chinese
Sign language interaction method, device, apparatus and storage medium

Technical Field

The present disclosure relates to the field of intelligent sign language interaction, and in particular to a sign language interaction method, device, apparatus, and storage medium.

Background Art

At present, sign language is the primary means by which deaf and mute people express their thoughts and communicate; they interact with others through sign language actions. For most hearing people, however, sign language is not a necessary means of communication, so only a small number of professionals ever learn it. This objectively limits the range of people with whom deaf and mute people can communicate.

Summary of the Invention

According to one aspect of the present disclosure, a sign language interaction method is provided, including: receiving a first sign language action of a first user; generating a reply sign language action based on the first sign language action and at least one of a sign language communication library and input information from a second user; and outputting the reply sign language action, where the sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library.

According to an embodiment of the present disclosure, generating the reply sign language action includes: matching the first sign language action against sign language action communication samples in the sign language action communication library; and obtaining the reply sign language action from the matched sign language action communication sample.

According to an embodiment of the present disclosure, the intelligent sign language interaction method further includes: generating first semantic information based on the first sign language action; generating reply semantic information based on the reply sign language action; and outputting the first semantic information and the reply semantic information.

According to an embodiment of the present disclosure, generating the reply sign language action includes: generating first semantic information based on the first sign language action; matching the first semantic information against sign language semantic communication samples in the sign language semantic communication library; and generating the reply sign language action from the matched sign language semantic communication sample.

According to an embodiment of the present disclosure, the intelligent sign language interaction method further includes: obtaining reply semantic information based on the sign language semantic communication sample; and outputting the first semantic information and the reply semantic information.

According to an embodiment of the present disclosure, the first semantic information includes text and/or audio having the same meaning as the first sign language action, and the reply semantic information includes text and/or audio having the same meaning as the reply sign language action.

According to an embodiment of the present disclosure, generating the reply sign language action includes: generating first semantic information based on the first sign language action; outputting the first semantic information; receiving second semantic information input by the second user; and generating the reply sign language action based on the second semantic information.

According to an embodiment of the present disclosure, the first semantic information includes text and/or audio having the same meaning as the first sign language action, and the second semantic information includes text and/or audio having the same meaning as the reply sign language action.

According to an embodiment of the present disclosure, the sign language interaction method further includes: detecting a second sign language action made by the second user according to the reply sign language action; determining whether the second sign language action is consistent with the reply sign language action; and, when the second sign language action is inconsistent with the reply sign language action, outputting prompt information.

According to an embodiment of the present disclosure, determining that the second sign language action is inconsistent with the reply sign language action includes: generating second semantic information based on the second sign language action; generating reply semantic information based on the reply sign language action; and determining, based on the second semantic information and the reply semantic information, whether the second sign language action is consistent with the reply sign language action.

According to another aspect of the present disclosure, a sign language interaction device is also provided, including: an image acquisition module configured to receive a first sign language action of a first user; a processing module configured to generate a reply sign language action based on the first sign language action and at least one of a sign language communication library and input information from a second user; and an output module configured to output the reply sign language action, where the sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library.

According to an embodiment of the present disclosure, the processing module generating the reply sign language action includes: matching the first sign language action against sign language action communication samples in the sign language action communication library and obtaining the reply sign language action from the matched sample. The processing module is further configured to generate first semantic information based on the first sign language action and to generate reply semantic information based on the reply sign language action, and the output module is further configured to output the first semantic information and the reply semantic information.

According to an embodiment of the present disclosure, the processing module generating the reply sign language action includes: generating first semantic information based on the first sign language action and matching the first semantic information against sign language semantic communication samples in the sign language semantic communication library. The processing module is further configured to generate the reply sign language action and obtain reply semantic information based on the matched sign language semantic communication sample, and the output module is further configured to output the first semantic information and the reply semantic information.

According to an embodiment of the present disclosure, the processing module is further configured to generate first semantic information based on the first sign language action; the output module is further configured to output the first semantic information; the device further includes an input module configured to receive second semantic information input by the second user; and the processing module generating the reply sign language action includes generating the reply sign language action based on the second semantic information.

According to an embodiment of the present disclosure, the sign language interaction device further includes a detection module configured to detect a second sign language action made by the second user according to the reply sign language action, and the processing module is further configured to determine whether the second sign language action is consistent with the reply sign language action and, when it determines that they are inconsistent, to output prompt information.

According to an embodiment of the present disclosure, the detection module includes at least one of an image acquisition unit, a wristband, an acceleration sensor, and a gyroscope, and outputting the prompt information includes at least one of emitting a prompt tone, producing a vibration, and controlling the output module to highlight the inconsistent reply sign language action.

According to yet another aspect of the present disclosure, a sign language interaction apparatus is also provided, including: at least one processor; and at least one memory, where the memory stores computer-readable code that, when executed by the at least one processor, performs the sign language interaction method described above or implements the sign language interaction device described above.

According to yet another aspect of the present disclosure, a non-transitory computer-readable storage medium is also provided, storing computer-readable code that, when executed by one or more processors, performs the sign language interaction method described above.

Brief Description of the Drawings

To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 shows a flowchart of a sign language interaction method according to an embodiment of the present disclosure;

FIG. 2 shows a flowchart of obtaining a reply sign language action using the sign language action communication library and the first sign language action according to an embodiment of the present disclosure;

FIG. 3 shows a flowchart of outputting semantic information according to the sign language interaction method shown in FIG. 2;

FIG. 4 shows a flowchart of generating a reply sign language action using the sign language semantic communication library and the first sign language action according to an embodiment of the present disclosure;

FIG. 5 shows a flowchart of outputting semantic information according to the sign language interaction method shown in FIG. 4;

FIG. 6 shows a flowchart of generating a reply sign language action based on input information from the second user and the first sign language action according to an embodiment of the present disclosure;

FIG. 7 shows a flowchart of detecting a second sign language action according to an embodiment of the present disclosure;

FIG. 8 shows a schematic block diagram of a sign language interaction device according to an embodiment of the present disclosure;

FIG. 9 shows a schematic diagram of a detection module according to an embodiment of the present disclosure;

FIG. 10 shows a schematic structural diagram of a sign language interaction device according to an embodiment of the present disclosure;

FIG. 11 shows a schematic block diagram of a sign language interaction apparatus according to an embodiment of the present disclosure.

Detailed Description

The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.

The terms "first", "second", and similar words used in the present disclosure do not denote any order, quantity, or importance, but are only used to distinguish different components. Likewise, words such as "including" or "comprising" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect.

Flowcharts are used in the present application to illustrate the steps of the methods according to the embodiments of the present application. It should be understood that the preceding and following steps are not necessarily performed in the exact order shown; on the contrary, the steps may be processed in reverse order or concurrently. Other operations may also be added to these processes, and one or more steps may be removed from them.

In the related art, a sign language interaction apparatus detects the sign language actions made by a deaf or mute person, translates them into text information, and then plays back the converted text. It only recognizes and translates sign language actions from the deaf or mute person's perspective and voices the result.

To enable normal communication between deaf and mute people and people who do not know sign language, a sign-language-to-speech conversion apparatus has been proposed. When a signer makes a sign language action, the apparatus detects the action, converts it into speech, and plays it to the non-signer; when the non-signer speaks, the apparatus converts the received speech into text and displays the text to the signer, thereby enabling communication between signers and non-signers.

However, such a conversion apparatus only enables text or speech communication between a signer and a non-signer; it cannot enable direct sign language communication between them. There is therefore a strong need for a sign language interaction method and device that allow signers and non-signers to communicate directly in sign language.

The present disclosure proposes a sign language interaction method, and FIG. 1 shows a flowchart of the sign language interaction method according to an embodiment of the present disclosure. First, in step S110, a first sign language action of a first user is received. For example, the first user may be a signer, and the first sign language action is the sign language action the signer makes to express what he or she wants to communicate. The receiving may be implemented by an image acquisition unit; for example, a camera may capture the sign language action made by the signer.

Next, in step S120, a reply sign language action is generated based on the received first sign language action and at least one of the sign language communication library and input information from the second user. The reply sign language action is semantically related to the first sign language action. For example, when the first sign language action expresses "What will the weather be like tomorrow?", the reply sign language action may express "The forecast says it will be sunny!". In other words, the generated reply sign language action answers the first sign language action. The sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library; the former contains sign language action communication samples, the latter contains sign language semantic communication samples.

Finally, in step S130, the generated reply sign language action is output. The output may be realized through a display screen or through a virtual reality/augmented reality device; for example, the reply sign language action may be displayed on a head-mounted display worn by the non-signer.

According to an embodiment of the present disclosure, the sign language interaction method may be implemented based on a deep learning algorithm. The deep learning algorithm may be trained on different sign language communication samples to form the sign language communication library; for example, it may learn from the sign language actions used in dialogues. The deep learning algorithm may also be used to perform semantic analysis on the acquired first sign language action. After the reply sign language action corresponding to the first sign language action is generated, it may be output, for example displayed on augmented reality (AR) glasses such as a head-mounted display, or on a display screen.
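
To make the overall flow concrete, the following minimal Python sketch wires the three steps together; `camera`, `reply_generator`, and `display` are hypothetical stand-ins for the image acquisition, processing, and output components described here, not part of the patent itself.

```python
def interaction_round(camera, reply_generator, display):
    """One pass of the S110-S130 flow sketched above.

    camera, reply_generator and display stand in for the image acquisition,
    processing and output modules; every call below is a stub.
    """
    first_action = camera.capture_action()                  # S110: receive first sign action
    reply_action = reply_generator.generate(first_action)   # S120: build reply from library/input
    display.render(reply_action)                            # S130: show reply to the second user
    return first_action, reply_action
```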

According to one embodiment of the present disclosure, the generation of the reply sign language action in step S120 may be implemented based on the sign language action communication library and the first sign language action. FIG. 2 shows a flowchart of generating a reply sign language action using the sign language action communication library and the first sign language action.

As shown in FIG. 2, first, in step S121, the first sign language action is matched against the sign language action communication samples in the sign language action communication library. The library contains sign language action communication samples for communication, which may be obtained by translating the semantic information of an existing sign language semantic communication library into sign language communication actions, or may be built directly. For example, in the sign language action communication library, a sample meaning "How do I get to a nearby shopping mall?" corresponds to a sample meaning something like "Turn left first, then go straight for 200 meters". After the first sign language action made by the first user is received, it is matched against the samples in the library to obtain a sample that is semantically related to the first sign language action.

Next, in step S122, the reply sign language action is obtained from the matched sign language action communication sample; for example, matched samples may be combined to form the reply sign language action. For instance, from the sample meaning "Turn left first, then go straight for 200 meters", a sign language action meaning "You can turn left at the intersection ahead and then go straight for 200 meters to reach the mall" may be obtained as the reply sign language action. Alternatively, when the semantics of the matched sample are relatively simple, the matched sample may be used directly as the reply sign language action. For example, if the first sign language action means "thank you" and the matched sample means "you're welcome", the matched sample can serve directly as the reply.
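
One plausible way to realize the matching of steps S121 and S122 is nearest-neighbour retrieval over action embeddings. The sketch below assumes the library is stored as (question embedding, reply action) pairs produced by some trained encoder; the patent does not prescribe a particular matching algorithm.

```python
import numpy as np


def match_reply_action(first_embedding, action_library, min_similarity=0.8):
    """S121/S122: retrieve the reply paired with the closest stored question action.

    action_library: list of (question_embedding, reply_action) pairs, where the
    embeddings are unit-normalised vectors produced by the same encoder that
    embedded the first sign language action.
    Returns the paired reply action, or None if nothing is similar enough.
    """
    best_reply, best_score = None, -1.0
    for question_embedding, reply_action in action_library:
        score = float(np.dot(first_embedding, question_embedding))  # cosine similarity
        if score > best_score:
            best_reply, best_score = reply_action, score
    return best_reply if best_score >= min_similarity else None
```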

Through the above sign language interaction method, the established sign language action communication library can be used to obtain a reply sign language action corresponding to the signer's first sign language action. In the embodiment shown in FIG. 2, the reply sign language action is obtained directly from the first sign language action, so the second user, who does not know sign language, is unaware of its meaning. Therefore, the sign language interaction method shown in FIG. 2 may further include outputting the semantic information of the first sign language action and of the reply sign language action.

FIG. 3 shows a flowchart of outputting semantic information according to the sign language interaction method shown in FIG. 2. First, in step S310, first semantic information is generated based on the first sign language action; then, in step S320, reply semantic information is generated based on the reply sign language action. Generating semantic information from sign language actions may be implemented by a sign language conversion module, which, for example, translates sign language actions into the corresponding text or speech. Note that steps S310 and S320 have no required order of execution.

Finally, in step S330, the generated first semantic information and reply semantic information are output. The first semantic information includes text and/or audio with the same meaning as the first sign language action, and the reply semantic information includes text and/or audio with the same meaning as the reply sign language action. The output may be realized by displaying the semantic information as text on a display screen, or by playing it as audio through a loudspeaker. From this output, the second user, who does not know sign language, can understand the meaning of the first user's first sign language action and of the corresponding reply sign language action.

According to another embodiment of the present disclosure, the generation of the reply sign language action in step S120 may also be implemented based on the sign language semantic communication library and the first sign language action. FIG. 4 shows a flowchart of generating a reply sign language action using the sign language semantic communication library and the first sign language action.

As shown in FIG. 4, in step S123, first semantic information is generated based on the first sign language action, where the first semantic information includes text and/or audio with the same meaning as the first sign language action. For example, a sign language conversion module may translate the first sign language action into the corresponding text or speech.

Next, in step S124, the first semantic information is matched against the sign language semantic communication samples in the sign language semantic communication library. The library contains sign language semantic communication samples for communication, which are semantic information in text or audio form. For example, in the sign language semantic communication library, a sample meaning "How do I get to a nearby shopping mall?" corresponds to a sample meaning something like "Turn left first, then go straight for 200 meters". After the first sign language action made by the first user is converted into first semantic information, it is matched against the samples in the library to obtain a sample that is semantically related to the first semantic information.

Next, in step S125, the reply sign language action is generated from the matched sign language semantic communication sample. For example, the sign language conversion module may translate the matched sample into a sign language action to serve as the reply sign language action.
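
In this semantic-library path (steps S124 and S125) the matching operates on text. A minimal sketch, assuming the library is a list of (question text, reply text) pairs and using token overlap as a stand-in for the semantic similarity a trained model would provide:

```python
def match_reply_text(first_text, semantic_library):
    """S124: pick the stored question text most similar to the recognised text.

    semantic_library: list of (question_text, reply_text) pairs.
    Token-overlap (Jaccard) similarity is used purely for illustration.
    """
    def jaccard(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

    _, reply_text = max(semantic_library, key=lambda pair: jaccard(first_text, pair[0]))
    return reply_text  # rendered into a sign action in S125 by the conversion module
```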

In the embodiment shown in FIG. 4, the reply sign language action is obtained from the first semantic information converted from the first sign language action and from the sign language semantic communication library, so the second user, who does not know sign language, is unaware of its meaning. Therefore, the sign language interaction method shown in FIG. 4 may further include outputting the semantic information of the first sign language action and of the reply sign language action.

FIG. 5 shows a flowchart of outputting semantic information according to the sign language interaction method shown in FIG. 4. In step S510, reply semantic information is obtained based on the matched sign language semantic communication sample; similarly, this step may be implemented by the sign language conversion module. Next, in step S520, the first semantic information converted from the first sign language action and the reply semantic information obtained from the matched sample are output. The first semantic information includes text and/or audio with the same meaning as the first sign language action, and the reply semantic information includes text and/or audio with the same meaning as the reply sign language action. The output may be realized by displaying text on a display screen or by playing speech through a loudspeaker. The second user can thus understand the meaning of the first sign language action and of the reply sign language action from the information output in step S520.

According to yet another embodiment of the present disclosure, the generation of the reply sign language action in step S120 may also be implemented based on input information from the second user and the first sign language action. FIG. 6 shows a flowchart of generating a reply sign language action based on input information from the second user and the first sign language action according to the present disclosure.

First, in step S126, first semantic information is generated based on the first sign language action, and in step S127, the generated first semantic information is output. The first semantic information includes text and/or audio with the same meaning as the first sign language action, and the output may be realized by displaying text on a display screen or playing speech through a loudspeaker. Through steps S126 and S127, the second user, who does not know sign language, can understand the meaning of the first user's first sign language action and compose a corresponding reply.

Next, in step S128, second semantic information input by the second user is received. For example, speech from the second user may be received through a microphone, or text typed by the second user may be received through an input interface. The speech or text is the second user's reply to the first sign language action. For example, if the first semantic information contained in the first sign language action is "May I ask what your name is?", the second semantic information input by the second user may be "My name is Zhang San".

Next, in step S129, the reply sign language action is generated based on the second semantic information. For example, the sign language conversion module may translate the received second semantic information "My name is Zhang San" into the corresponding sign language action.
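
Step S129 can be illustrated with a simple dictionary-based renderer: each word of the second user's reply is looked up in a table of prerecorded sign clips and the clips are concatenated. The `sign_clips` table is an assumption for illustration only; a real conversion module would also handle sign grammar and word order.

```python
def text_to_sign(reply_text, sign_clips):
    """S129: assemble a reply sign action from the second user's text input.

    sign_clips: dict mapping a word (or single letter) to a prerecorded clip,
    e.g. a list of keypoint frames. Words missing from the dictionary are
    fingerspelled by falling back to per-letter clips.
    """
    frames = []
    for word in reply_text.lower().split():
        if word in sign_clips:
            frames.extend(sign_clips[word])
        else:
            for letter in word:
                frames.extend(sign_clips.get(letter, []))
    return frames
```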

According to the method described in FIG. 2, 4, or 6, a reply sign language action corresponding to the first user's first sign language action can be obtained, and the second user, who does not know sign language, can also understand the meaning of the first sign language action and the reply sign language action. When the reply sign language action is output in step S130, the second user can see it on a display screen or through a virtual reality/augmented reality device and imitate it to make the corresponding gesture. In other words, in the sign language interaction method according to the present disclosure, signers and non-signers can communicate directly through sign language actions, and non-signers can gradually learn sign language from the reply sign language actions, thereby raising the level at which non-signers can communicate with signers in sign language.

The sign language interaction method shown in FIG. 1 further includes detecting whether the second sign language action made by the second user according to the output reply sign language action is accurate, that is, whether the semantics of the second sign language action are consistent with the semantics of the reply sign language action, so as to ensure that non-signers can accurately express the reply semantics with sign language actions.

FIG. 7 shows a flowchart of detecting a second sign language action according to an embodiment of the present disclosure. First, in step S710, the second sign language action made by the second user according to the reply sign language action is detected.

For example, the second sign language action may be captured by the image acquisition unit, or detected by a wearable device configured for the second user. For instance, wristbands equipped with an acceleration sensor or a gyroscope may be worn on both wrists of the second user; when the second user makes the second sign language action, the two wristbands can detect the second user's arm movements. For example, by measuring the acceleration due to gravity, the tilt angle of the wristband relative to the horizontal plane can be computed; by analyzing the dynamic acceleration, the way the wristband moves can be derived, thereby detecting the second user's hand movements. The wristbands may also be used together with the image acquisition unit to detect the second user's second sign language action, further ensuring that the sign language action made by the second user is detected accurately.
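
The tilt-from-gravity computation mentioned here is standard for accelerometer-equipped wearables. The sketch below assumes static readings in g units and ignores sensor noise and dynamic motion, which a real wristband would remove, for example by fusing the gyroscope data.

```python
import math


def tilt_angles(ax, ay, az):
    """Estimate a wristband's pitch and roll (degrees) from gravity alone.

    ax, ay, az: static accelerometer readings along the band's axes, in g.
    """
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll


# A band lying flat measures roughly (0, 0, 1) g and reports no tilt.
print(tilt_angles(0.0, 0.0, 1.0))  # (0.0, 0.0)
```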

Next, in step S720, it is determined whether the second sign language action is consistent with the reply sign language action; for example, consistency may be determined based on the semantic information of the second sign language action and of the reply sign language action.

For example, determining whether the second sign language action is consistent with the reply sign language action may include the following steps. First, in step S721, second semantic information is generated based on the second sign language action; next, in step S722, reply semantic information is generated based on the reply sign language action; then, in step S723, it is determined, based on the generated second semantic information and reply semantic information, whether the second sign language action is consistent with the reply sign language action. For example, if the reply semantic information corresponding to the reply sign language action is "My name is Zhang San" while the second semantic information corresponding to the detected second sign language action is "I am in third place", the semantics of the second user's second sign language action are determined to be inconsistent with those of the reply sign language action; in other words, the second user failed to reproduce the reply sign language action accurately.
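
Once both actions have been converted to semantic information, the comparison of step S723 reduces to comparing two pieces of text. A minimal sketch, with token overlap as an assumed similarity measure and a tunable threshold:

```python
def actions_consistent(second_text, reply_text, threshold=0.8):
    """S723: decide whether the imitated action matches the reply action.

    second_text / reply_text: semantic information generated in S721 / S722.
    Token overlap stands in for whatever semantic comparison is actually used.
    """
    ta, tb = set(second_text.lower().split()), set(reply_text.lower().split())
    similarity = len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0
    return similarity >= threshold


# "I am in third place" vs. "My name is Zhang San" -> inconsistent, so prompt the user.
print(actions_consistent("I am in third place", "My name is Zhang San"))  # False
```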

In other embodiments according to the present disclosure, the detected second sign language action may also be compared directly with the reply sign language action to determine whether they are consistent. For example, the second sign language action captured by the image acquisition unit may be decomposed into individual frames for analysis, image recognition may be used to locate the position and posture of the observed target, the differences between the second sign language action and the reply sign language action may be compared, and the inconsistent sign language actions identified.
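
The direct comparison described here can be sketched as a frame-by-frame distance between pose keypoints. `extract_keypoints` stands in for the image-recognition step that locates the observed target's position and posture, and the per-frame pairing assumes the two sequences are already time-aligned; a real system would additionally handle different signing speeds, for instance with dynamic time warping.

```python
import numpy as np


def inconsistent_frames(second_frames, reply_frames, extract_keypoints, max_error=0.1):
    """Return indices of frames where the imitated action deviates from the reply.

    second_frames / reply_frames: equal-length, time-aligned frame sequences
    (captured video frames and rendered reply frames).
    extract_keypoints: callable mapping a frame to an (N, 2) array of
    normalised keypoint coordinates, e.g. from a pose-estimation model.
    """
    bad = []
    for i, (sf, rf) in enumerate(zip(second_frames, reply_frames)):
        a, b = extract_keypoints(sf), extract_keypoints(rf)
        error = float(np.mean(np.linalg.norm(a - b, axis=1)))  # mean keypoint distance
        if error > max_error:
            bad.append(i)  # candidate frames to highlight for the second user
    return bad
```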

Next, in step S730, when the second sign language action is inconsistent with the reply sign language action, prompt information is output. The output may be realized on a display screen or a head-mounted display. Displaying the inconsistent reply sign language action lets the second user make the corresponding second sign language action again, until the second user accurately reproduces the gesture indicated by the reply sign language action.

According to an embodiment of the present disclosure, the sign language interaction method shown in FIG. 1 further includes: when it is determined that the second sign language action is inconsistent with the reply sign language action, prompting the second user, so that the second user knows that his or her sign language action differs from the output reply sign language action and needs to be made again. For example, the wristband worn by the second user may emit a prompt tone or vibrate to alert the second user. According to another embodiment of the present disclosure, the inconsistent reply sign language action may also be displayed enlarged, or a local region of it may be enlarged, to prompt the second user.

According to an embodiment of the present disclosure, a sign language interaction device is also provided. FIG. 8 shows a schematic block diagram of the sign language interaction device 800, which may include an image acquisition module 801, a processing module 802, and an output module 803. The image acquisition module 801 is configured to receive the first sign language action of the first user; the processing module 802 is configured to generate the reply sign language action based on the first sign language action and at least one of the sign language communication library and input information from the second user; and the output module 803 is configured to output the reply sign language action. The reply sign language action is semantically related to the first sign language action, and the first user may be a signer. The sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library; the former contains sign language action communication samples, the latter contains sign language semantic communication samples.

According to one embodiment of the present disclosure, the processing module 802 generating the reply sign language action may include: matching the first sign language action against the sign language action communication samples in the sign language action communication library and then obtaining the reply sign language action from the matched sample. The processing module 802 is further configured to generate first semantic information based on the first sign language action and reply semantic information based on the reply sign language action. The output module 803 is further configured to output the first semantic information and the reply semantic information.

According to another embodiment of the present disclosure, the processing module 802 generating the reply sign language action may also include: generating first semantic information based on the first sign language action and matching the first semantic information against the sign language semantic communication samples in the sign language semantic communication library. The processing module 802 is further configured to generate the reply sign language action and obtain reply semantic information based on the matched sample. The output module 803 is further configured to output the first semantic information and the reply semantic information.

According to yet another embodiment of the present disclosure, the processing module 802 is further configured to generate first semantic information based on the first sign language action, the output module 803 is further configured to output the first semantic information, and the device further includes an input module configured to receive second semantic information input by the second user. The processing module 802 generating the reply sign language action may then include generating the reply sign language action based on the second semantic information.

In embodiments according to the present disclosure, the first semantic information includes text and/or audio with the same meaning as the first sign language action, and the second semantic information includes text and/or audio with the same meaning as the reply sign language action. The output module 803 may display text corresponding to the first semantic information and the reply semantic information, or play the corresponding audio information.

The sign language interaction device shown in FIG. 8 may further include a detection module 804. According to an embodiment of the present disclosure, the detection module 804 may be configured to detect the second sign language action made by the second user according to the reply sign language action. The processing module 802 is further configured to determine whether the second sign language action is consistent with the reply sign language action and, if it determines that they are inconsistent, to output prompt information.

FIG. 9 shows a schematic diagram of a detection module according to an embodiment of the present disclosure. As shown in FIG. 9, the detection module 804 may include a wearable device, for example wristbands worn on both wrists of the second user. The wristbands may also be equipped with an acceleration sensor and a gyroscope to detect the second user's hand movements while making the second sign language action, thereby detecting the second sign language action. The detection module 804 may further include an image acquisition unit, for example a camera (not shown), used to capture the second sign language action. According to one embodiment of the present disclosure, the camera may be mounted on the second user's head-mounted display. Moreover, the image acquisition unit used to capture the second sign language action may be the same as the one used to capture the first sign language action.

According to an embodiment of the present disclosure, the processing module 802 in the sign language interaction device 800 may also be configured to output prompt information to the second user when it determines that the second sign language action is inconsistent with the reply sign language action. For example, the prompt information may include emitting a prompt tone to the second user or making the second user's wristband vibrate, and may also include controlling the output module 803 to highlight the inconsistent reply sign language action, for example by marking it in red or displaying it enlarged, so that the second user can remake the second sign language action based on the highlighted sign language action until the correct gesture is completed.

The sign language interaction device shown in FIG. 8 can be used to recognize the first sign language action made by the first user and to generate the corresponding reply sign language action. The reply sign language action may be obtained directly from the first sign language action and the sign language action communication library, or by first converting the first sign language action into first semantic information and then using the first semantic information and the sign language semantic communication library, or based on input from the second user. The second user, who does not know sign language, can make a corresponding second sign language action based on the output reply sign language action, and the accuracy of the second user's gesture is checked by detecting whether the second sign language action is consistent with the reply sign language action, thereby ensuring correct and fluent sign language communication between the second user and the first user. Note that in embodiments according to the present disclosure, the above ways of obtaining the reply sign language action may be combined. For example, after acquiring the first user's first sign language action, the sign language interaction device may first automatically generate a reply sign language action using at least one of the sign language action communication library and the sign language semantic communication library; after receiving the automatically generated reply sign language action, the second user can judge from its reply semantic information whether it matches the reply he or she intends. On this basis, the second user can decide whether to adopt the automatically generated reply sign language action. For example, if the second user thinks the generated reply sign language action does not match the intended reply, he or she can obtain the intended reply sign language action by inputting the reply information instead, as sketched below.
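
The combined flow described in this paragraph can be sketched as a small decision function; `auto_reply`, `show_text`, `accepted`, and `read_user_reply` are hypothetical callables standing in for the library lookup, the output module, and the input module.

```python
def choose_reply_action(first_action, auto_reply, show_text, accepted,
                        read_user_reply, text_to_sign):
    """Automatic suggestion first, the second user's own reply as a fallback."""
    suggestion = auto_reply(first_action)   # from the action/semantic library
    show_text(suggestion)                   # let the second user review its semantics
    if accepted():                          # suggestion matches the intended reply
        return text_to_sign(suggestion)
    return text_to_sign(read_user_reply())  # typed or spoken reply instead
```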

According to one embodiment of the present disclosure, the sign language interaction device may be implemented as an augmented reality/virtual reality product; for example, the modules of the device 800 described above may be integrated into augmented reality (AR) glasses, a schematic structure of which is shown in FIG. 10.

The AR glasses shown in FIG. 10 may be worn by the second user, who does not know sign language, and may be equipped with a camera serving as the image acquisition module 801 to capture the first sign language action made by the first user. The processing module 802 (not shown) in the AR glasses then generates a reply sign language action based on the captured first sign language action. The reply sign language action may be obtained directly from the first sign language action and the sign language action communication library, or by first converting the first sign language action into first semantic information and then using the first semantic information and the sign language semantic communication library, or based on input from the second user; for example, the AR glasses may be equipped with a microphone to receive the second user's speech input.

Next, the AR glasses display the reply sign language action, and the second user can make the corresponding second sign language action based on the displayed reply. In addition, while the reply sign language action is being obtained, the AR glasses may also display the semantic information of the first sign language action and of the reply sign language action, so that the second user understands the semantics of the current signs.

After the second user makes the second sign language action, the camera may also be used to capture it, in order to detect whether the second sign language action is consistent with the reply sign language action. According to one embodiment of the present disclosure, the detection device shown in FIG. 9 may also be used: for example, an acceleration sensor and a gyroscope may analyze how the wearable device (e.g., a wristband) moves and detect the hand movements. The processing module 802 can determine, based on the detected second sign language action, whether it is consistent with the displayed reply sign language action. If they are inconsistent, the processing module 802 may, for example, make the detection device vibrate to prompt the second user, or control the AR glasses to highlight the inconsistent sign language action, prompting the second user to remake the second sign language action until the correct gesture is completed.

According to an embodiment of the present disclosure, a sign language interaction apparatus is also provided. As shown in FIG. 11, the sign language interaction apparatus may include at least one processor and at least one memory.

The memory stores computer-readable code which, when executed by the at least one processor, performs the sign language interaction method shown in FIG. 1 or implements the sign language interaction device shown in FIG. 8.

According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium is also provided, in which computer-readable code is stored. The computer-readable code, when executed by one or more processors, performs the sign language interaction method shown in FIG. 1.

The present disclosure provides a sign language interaction method, device, apparatus, and storage medium that can recognize a first sign language action made by a first user (for example, a signer) and obtain a corresponding reply sign language action based on the first sign language action and at least one of a sign language communication library and input information of a second user, so that the second user (for example, a non-signer) can make a corresponding second sign language action according to the displayed reply sign language action. In addition, according to the present disclosure, the second sign language action made by the second user can be detected and compared with the displayed reply sign language action to determine whether the two are consistent, thereby helping the second user improve the second sign language action and enabling fluent sign language communication between non-signers and signers.

In addition, those skilled in the art will understand that aspects of the present application may be illustrated and described in terms of several patentable categories or situations, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, microcode, and the like), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media, the product including computer-readable program code.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in common dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

The foregoing is a description of the present invention and should not be construed as limiting it. Although several exemplary embodiments of the invention have been described, those skilled in the art will readily appreciate that many modifications can be made to the exemplary embodiments without departing from the novel teachings and advantages of the invention. Accordingly, all such modifications are intended to be included within the scope of the invention as defined by the claims. It is to be understood that the foregoing is a description of the invention and is not to be construed as limited to the particular embodiments disclosed, and that modifications to the disclosed embodiments and other embodiments are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (18)

1. A sign language interaction method, comprising: receiving a first sign language action of a first user; generating a reply sign language action based on the first sign language action and at least one of a sign language communication library and input information of a second user; and outputting the reply sign language action, wherein the sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library.

2. The method according to claim 1, wherein generating the reply sign language action comprises: matching the first sign language action with sign language action communication samples in the sign language action communication library; and obtaining the reply sign language action from the matched sign language action communication sample.

3. The method according to claim 2, further comprising: generating first semantic information based on the first sign language action; generating reply semantic information based on the reply sign language action; and outputting the first semantic information and the reply semantic information.

4. The method according to claim 1, wherein generating the reply sign language action comprises: generating first semantic information based on the first sign language action; matching the first semantic information with sign language semantic communication samples in the sign language semantic communication library; and generating the reply sign language action from the matched sign language semantic communication sample.

5. The method according to claim 4, further comprising: obtaining reply semantic information based on the sign language semantic communication sample; and outputting the first semantic information and the reply semantic information.

6. The method according to claim 3 or 5, wherein the first semantic information includes text and/or audio having the same meaning as the first sign language action, and the reply semantic information includes text and/or audio having the same meaning as the reply sign language action.

7. The method according to claim 1, wherein generating the reply sign language action comprises: generating first semantic information based on the first sign language action; outputting the first semantic information; receiving second semantic information input by the second user; and generating the reply sign language action based on the second semantic information.

8. The method according to claim 7, wherein the first semantic information includes text and/or audio having the same meaning as the first sign language action, and the second semantic information includes text and/or audio having the same meaning as the reply sign language action.

9. The method according to claim 1, further comprising: detecting a second sign language action made by the second user according to the reply sign language action; determining whether the second sign language action is consistent with the reply sign language action; and outputting prompt information when the second sign language action is inconsistent with the reply sign language action.

10. The method according to claim 9, wherein determining whether the second sign language action is consistent with the reply sign language action comprises: generating second semantic information based on the second sign language action; generating reply semantic information based on the reply sign language action; and determining, based on the second semantic information and the reply semantic information, whether the second sign language action is consistent with the reply sign language action.

11. A sign language interaction device, comprising: an image acquisition module configured to receive a first sign language action of a first user; a processing module configured to generate a reply sign language action based on the first sign language action and at least one of a sign language communication library and input information of a second user; and an output module configured to output the reply sign language action, wherein the sign language communication library includes at least one of a sign language action communication library and a sign language semantic communication library.

12. The device according to claim 11, wherein the processing module generates the reply sign language action by matching the first sign language action with sign language action communication samples in the sign language action communication library and obtaining the reply sign language action from the matched sign language action communication sample; the processing module is further configured to generate first semantic information based on the first sign language action and to generate reply semantic information based on the reply sign language action; and the output module is further configured to output the first semantic information and the reply semantic information.

13. The device according to claim 11, wherein the processing module generates the reply sign language action by generating first semantic information based on the first sign language action and matching the first semantic information with sign language semantic communication samples in the sign language semantic communication library; the processing module is further configured to generate the reply sign language action and obtain reply semantic information based on the matched sign language semantic communication sample; and the output module is further configured to output the first semantic information and the reply semantic information.

14. The device according to claim 11, wherein the processing module is further configured to generate first semantic information based on the first sign language action; the output module is further configured to output the first semantic information; the device further comprises an input module configured to receive second semantic information input by the second user; and the processing module generates the reply sign language action based on the second semantic information.

15. The device according to claim 11, further comprising a detection module configured to detect a second sign language action made by the second user according to the reply sign language action, wherein the processing module is further configured to determine whether the second sign language action is consistent with the reply sign language action, and to output prompt information when it is determined that the second sign language action is inconsistent with the reply sign language action.

16. The device according to claim 15, wherein the detection module includes at least one of an image acquisition unit, a wristband, an acceleration sensor, and a gyroscope; and outputting the prompt information includes at least one of emitting a prompt sound, vibrating, and controlling the output module to highlight the inconsistent reply sign language action.

17. A sign language interaction apparatus, comprising: at least one processor; and at least one memory, wherein the memory stores computer-readable code which, when executed by the at least one processor, performs the sign language interaction method according to any one of claims 1-10 or implements the sign language interaction device according to any one of claims 11-16.

18. A non-transitory computer-readable storage medium having stored therein computer-readable code which, when executed by one or more processors, performs the sign language interaction method according to any one of claims 1-10.


