CN117788239A - Multi-mode feedback method, device, equipment and storage medium for talent training - Google Patents

Multi-mode feedback method, device, equipment and storage medium for talent training

Info

Publication number
CN117788239A
Authority
CN
China
Prior art keywords
feedback
learning
score
talent
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410201444.5A
Other languages
Chinese (zh)
Other versions
CN117788239B (en)
Inventor
李翔
赵璧
詹歆
刘慧�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New Licheng Education Technology Co ltd
Original Assignee
Xinlicheng Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinlicheng Education Technology Co ltd
Priority to CN202410201444.5A
Publication of CN117788239A
Application granted
Publication of CN117788239B
Status: Active (current)
Anticipated expiration

Abstract

The multi-modal feedback method for eloquence training includes: obtaining input information and generating an original learning plan according to the input information; acquiring first eloquence training data during execution of the original learning plan; analyzing the first eloquence training data in several eloquence dimensions to obtain an eloquence score corresponding to each dimension; and adjusting the original learning plan according to the eloquence scores and the input information to obtain a target learning plan, so that the target learning plan is determined from personalized input information and per-dimension eloquence scores. Sentiment analysis is performed on target eloquence training data to obtain a sentiment analysis result, and multi-modal feedback is determined according to the result. Performing multi-modal feedback in combination with emotional factors helps improve the accuracy and diversity of the feedback and the training effect.

Description

Translated from Chinese
A multi-modal feedback method, device, equipment and storage medium for eloquence training

Technical Field

The present application relates to the field of eloquence training, and in particular to a multi-modal feedback method, device, equipment and storage medium for eloquence training.

Background Art

Traditional eloquence training methods rely mainly on human coaches or simple self-practice. These methods usually cannot provide accurate feedback or personalized suggestions, are limited by time and location, cannot support learning anytime and anywhere, and have the following problems: 1. Accuracy: traditional methods cannot provide precise evaluation of, and feedback on, eloquence expression, so learners cannot understand their own weaknesses and points for improvement. 2. Personalization: different learners have different eloquence dimensions and learning needs, but traditional methods can only formulate fixed learning plans and cannot provide personalized, targeted plans and suggestions. 3. Multi-modality: sentiment analysis and multi-modal feedback see limited application in eloquence training; traditional methods can only give text feedback on the textual content of an expression, which limits comprehensive analysis and improvement of a learner's speech performance. 4. Timeliness: traditional face-to-face training is limited by time and location and cannot support learning anytime and anywhere. 5. Feedback quality: the feedback of traditional training methods is subjective and inaccurate, and fails to provide clear guidance for improvement.

Summary of the Invention

Embodiments of the present application provide a multi-modal feedback method, device, equipment and storage medium for eloquence training, so as to solve at least one of the problems in the related art. The technical solution is as follows:

In a first aspect, embodiments of the present application provide a multi-modal feedback method for eloquence training, including:

obtaining input information, and generating an original learning plan according to the input information;

acquiring first eloquence training data during execution of the original learning plan;

analyzing the first eloquence training data in several eloquence dimensions to obtain an eloquence score corresponding to each eloquence dimension, and adjusting the original learning plan according to the eloquence scores and the input information to obtain a target learning plan;

acquiring second eloquence training data during execution of the target learning plan, and using the first eloquence training data or the second eloquence training data as target eloquence training data;

performing sentiment analysis on the target eloquence training data to obtain a sentiment analysis result, and determining multi-modal feedback according to the sentiment analysis result, where the multi-modal feedback includes at least one of text feedback, sound feedback, visual feedback and tactile feedback.

In one implementation, the input information includes a learning goal, a learning path and available learning time, each learning path includes corresponding learning materials, and adjusting the original learning plan according to the eloquence scores and the input information to obtain the target learning plan includes:

determining a learning goal parameter according to the learning goal, determining a learning material parameter characterizing the effect of the learning materials according to the learning materials, determining a learning cost parameter according to the original learning plan, and determining a behavior modeling parameter according to user feedback information or historical data of the execution of the original learning plan;

determining a plan score of the original learning plan according to the eloquence scores, the learning goal parameter, the learning material parameter, the learning cost parameter, the available learning time and the behavior modeling parameter;

when the plan score is greater than a score threshold, adjusting the original learning plan to obtain the target learning plan.

In one implementation, determining the plan score of the original learning plan according to the eloquence scores, the learning goal parameter, the learning material parameter, the learning cost parameter, the available learning time and the behavior modeling parameter includes:

determining weight parameters respectively corresponding to the eloquence scores, the learning goal parameter, the learning material parameter, the learning cost parameter and the behavior modeling parameter;

performing a weighted calculation according to each weight parameter, the eloquence scores, the learning goal parameter, the learning material parameter, the learning cost parameter, the available learning time and the behavior modeling parameter to obtain the plan score of the original learning plan.
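The weighted plan-score calculation described above can be sketched as follows. The application does not fix concrete weight values, a sign convention for the cost term, or a threshold, so all of those are illustrative assumptions here.

```python
# Hypothetical sketch of the plan-score computation. Weight values,
# the negative sign on the cost term, and the 0.6 threshold are
# assumptions for illustration, not values from the application.

def plan_score(eloquence_scores, goal, material, cost, available_time,
               behavior, weights):
    """Weighted combination of the plan-scoring inputs (all floats)."""
    avg_eloquence = sum(eloquence_scores) / len(eloquence_scores)
    return (weights["eloquence"] * avg_eloquence
            + weights["goal"] * goal
            + weights["material"] * material
            - weights["cost"] * cost          # assume higher cost lowers the score
            + weights["time"] * available_time
            + weights["behavior"] * behavior)

def needs_adjustment(score, threshold=0.6):
    # Per the embodiment, the plan is adjusted when the score
    # exceeds the score threshold.
    return score > threshold
```

Called with normalized inputs in [0, 1], this yields a single scalar that the threshold check can act on.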

In one implementation, performing sentiment analysis on the target eloquence training data to obtain the sentiment analysis result includes:

performing, by a sentiment analysis engine, multi-modal sentiment analysis on the target eloquence training data to obtain a text sentiment analysis result, a voice sentiment analysis result and an image sentiment analysis result;

converting, by a multi-modal feedback generator, the text sentiment analysis result, the voice sentiment analysis result and the image sentiment analysis result to obtain an emotion intensity and an emotion type.
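The application does not specify how the feedback generator converts the three per-modality results into one intensity and one type. One simple hedged reading, assuming each modality yields a polarity score in [-1, 1] and averaging them, might look like:

```python
# Hypothetical fusion of per-modality sentiment results into an
# emotion intensity and emotion type. The [-1, 1] polarity range and
# the plain averaging scheme are illustrative assumptions.

def fuse_sentiment(text_score, voice_score, image_score):
    """Each input is a polarity in [-1, 1]; returns (intensity, type)."""
    mean = (text_score + voice_score + image_score) / 3
    emotion_type = "positive" if mean >= 0 else "negative"
    intensity = abs(mean)  # strength of the emotion regardless of polarity
    return intensity, emotion_type
```

A weighted fusion (e.g. trusting the voice channel more) would be an equally valid reading; the point is only that three modality results collapse into one (intensity, type) pair.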

In one implementation, determining the multi-modal feedback according to the sentiment analysis result includes:

determining a target eloquence score corresponding to each eloquence dimension of the target eloquence training data;

when the emotion type is a positive emotion, determining a target value of the emotion type as a first value; otherwise, determining the target value as a second value;

determining, according to the emotion intensity, an emotion weight parameter corresponding to the multi-modal feedback, where the emotion intensity is positively correlated with the emotion weight parameter;

performing a weighted calculation according to each target eloquence score, an eloquence score weight parameter, the target value, the emotion intensity and the emotion weight parameter to determine a feedback score of the multi-modal feedback;

performing at least one of text feedback, sound feedback, visual feedback and tactile feedback according to the feedback score.

In one implementation, performing at least one of text feedback, sound feedback, visual feedback and tactile feedback according to the feedback score includes:

when the feedback score is greater than a feedback threshold, feeding back, by a virtual tutor system, at least one of positive text, positive voice, a positive image or animation, and tactile feedback of a first intensity;

when the feedback score is less than or equal to the feedback threshold, feeding back, by the virtual tutor system, at least one of negative text, negative voice, a negative image or animation, and tactile feedback of a second intensity, where the first intensity is greater than the second intensity.
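The feedback-score computation and threshold dispatch described above can be sketched as follows. The exact combining formula, the first/second values (1.0 and 0.0 here), the weights, and the 0.5 threshold are all illustrative assumptions; the claims only require a weighted combination of the named quantities.

```python
# Hypothetical sketch of the feedback-score computation and the
# threshold-based dispatch. All concrete values are assumptions.

def target_value(emotion_type, first_value=1.0, second_value=0.0):
    # First value for positive emotion, second value otherwise.
    return first_value if emotion_type == "positive" else second_value

def emotion_weight(intensity, scale=1.0):
    # Positively correlated with the emotion intensity, as stated.
    return scale * intensity

def feedback_score(dim_scores, dim_weights, tval, intensity, scale=1.0):
    # Weighted sum over per-dimension eloquence scores, plus an
    # emotion term modulated by the emotion weight.
    weighted_dims = sum(w * s for w, s in zip(dim_weights, dim_scores))
    return weighted_dims + emotion_weight(intensity, scale) * tval

def dispatch(score, threshold=0.5):
    # Above the threshold: positive feedback with the stronger (first)
    # haptic intensity; otherwise negative with the weaker (second) one.
    if score > threshold:
        return {"tone": "positive", "haptic": "first_intensity"}
    return {"tone": "negative", "haptic": "second_intensity"}
```

The dispatch result would then select which of the text, voice, image/animation and haptic channels the virtual tutor system actually drives.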

In one implementation, the method further includes:

performing feature extraction on the target eloquence training data to obtain several eloquence dimension indicators and time-series eloquence dimension indicators;

determining a first emotion perception indicator according to the eloquence dimension indicators and preset weights, and determining a second emotion perception indicator according to the time-series eloquence dimension indicators and preset weights;

determining an emotional feedback suggestion according to the first emotion perception indicator and/or the second emotion perception indicator;

generating the emotional feedback suggestion in real time, or, using a reinforcement learning method, determining and generating a target emotional feedback suggestion according to the emotional feedback suggestion and a reward function;

or,

calculating a fluency score and a confidence score of the target eloquence training data;

outputting, by the virtual tutor system, improvement suggestions and/or strengths according to the fluency score and the confidence score.
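The two emotion perception indicators above can be read as a weighted sum over static dimension indicators and a weighted sum over a time series of the same indicators. A minimal sketch under that assumption (the averaging of per-frame scores is an illustrative choice, not specified by the application):

```python
# Hypothetical sketch of the first (static) and second (time-series)
# emotion perception indicators. The preset weights and the per-frame
# averaging for the time-series case are illustrative assumptions.

def first_indicator(dim_indicators, preset_weights):
    """Static indicator: weighted sum over eloquence dimension indicators."""
    return sum(w * x for w, x in zip(preset_weights, dim_indicators))

def second_indicator(series, preset_weights):
    """Time-series indicator: weighted sum per frame, averaged over frames."""
    frame_scores = [sum(w * x for w, x in zip(preset_weights, frame))
                    for frame in series]
    return sum(frame_scores) / len(frame_scores)
```

Either indicator (or both) would then feed the rule that selects an emotional feedback suggestion.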

In a second aspect, embodiments of the present application provide a multi-modal feedback device for eloquence training, including:

a first acquisition module, configured to obtain input information and generate an original learning plan according to the input information;

a second acquisition module, configured to acquire first eloquence training data during execution of the original learning plan;

an adjustment module, configured to analyze the first eloquence training data in several eloquence dimensions to obtain an eloquence score corresponding to each eloquence dimension, and adjust the original learning plan according to the eloquence scores and the input information to obtain a target learning plan;

a third acquisition module, configured to acquire second eloquence training data during execution of the target learning plan, and use the first eloquence training data or the second eloquence training data as target eloquence training data;

a feedback module, configured to perform sentiment analysis on the target eloquence training data to obtain a sentiment analysis result, and determine multi-modal feedback according to the sentiment analysis result, where the multi-modal feedback includes at least one of text feedback, sound feedback, visual feedback and tactile feedback.

In one implementation, the feedback module is further configured to:

perform feature extraction on the target eloquence training data to obtain several eloquence dimension indicators and time-series eloquence dimension indicators;

determine a first emotion perception indicator according to the eloquence dimension indicators and preset weights, and determine a second emotion perception indicator according to the time-series eloquence dimension indicators and preset weights;

determine an emotional feedback suggestion according to the first emotion perception indicator and/or the second emotion perception indicator;

generate the emotional feedback suggestion in real time, or, using a reinforcement learning method, determine and generate a target emotional feedback suggestion according to the emotional feedback suggestion and a reward function;

or,

calculate a fluency score and a confidence score of the target eloquence training data;

output, by the virtual tutor system, improvement suggestions and/or strengths according to the fluency score and the confidence score.

In a third aspect, embodiments of the present application provide an electronic device, including a processor and a memory, where the memory stores instructions that are loaded and executed by the processor to implement the method in any one of the implementations of the above aspects.

In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that, when executed, implements the method in any one of the implementations of the above aspects.

The beneficial effects of the above technical solution include at least the following:

Input information is obtained and an original learning plan is generated from it. During execution of the original learning plan, first eloquence training data are acquired and analyzed in several eloquence dimensions to obtain an eloquence score for each dimension, and the original learning plan is adjusted according to the eloquence scores and the input information to obtain a target learning plan. Because the target learning plan is determined from personalized input information and per-dimension eloquence scores, it can adapt to the needs of different users. During execution of the target learning plan, second eloquence training data are acquired, and the first or the second eloquence training data serve as target eloquence training data. Sentiment analysis is performed on the target eloquence training data, and multi-modal feedback, including at least one of text feedback, sound feedback, visual feedback and tactile feedback, is determined from the result. Performing multi-modal feedback in combination with emotional factors helps improve the accuracy and diversity of the feedback and the user's training effect.

The above summary is for illustrative purposes only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, further aspects, embodiments and features of the present application will be readily apparent from the accompanying drawings and the following detailed description.

Brief Description of the Drawings

In the drawings, unless otherwise specified, the same reference numbers refer to the same or similar parts or elements throughout the several figures. The drawings are not necessarily drawn to scale. It should be understood that these drawings depict only some embodiments disclosed in accordance with the present application and should not be regarded as limiting its scope.

Figure 1 is a schematic flowchart of a multi-modal feedback method for eloquence training according to an embodiment of the present application;

Figure 2 is a structural block diagram of a multi-modal feedback device for eloquence training according to an embodiment of the present application;

Figure 3 is a structural block diagram of an electronic device according to an embodiment of the present application.

Detailed Description

In the following, only some exemplary embodiments are briefly described. As those skilled in the art will appreciate, the described embodiments may be modified in various ways without departing from the spirit or scope of the present application. Therefore, the drawings and descriptions are to be regarded as illustrative in nature and not restrictive.

Referring to Figure 1, a flowchart of a multi-modal feedback method for eloquence training according to an embodiment of the present application is shown. The method may include at least steps S100-S500:

S100: Obtain input information and generate an original learning plan according to the input information.

S200: Acquire first eloquence training data during execution of the original learning plan.

S300: Analyze the first eloquence training data in several eloquence dimensions to obtain an eloquence score corresponding to each dimension, and adjust the original learning plan according to the eloquence scores and the input information to obtain a target learning plan.

S400: Acquire second eloquence training data during execution of the target learning plan, and use the first eloquence training data or the second eloquence training data as target eloquence training data.

S500: Perform sentiment analysis on the target eloquence training data to obtain a sentiment analysis result, and determine multi-modal feedback according to the result, the multi-modal feedback including at least one of text feedback, sound feedback, visual feedback and tactile feedback.

The multi-modal feedback method for eloquence training of the embodiments of the present application may be executed by the electronic control unit, controller or processor of a terminal such as a computer, mobile phone, tablet or vehicle-mounted terminal, or by a cloud server, for example by a system running on a cloud server.

In the technical solution of the embodiments of the present application, input information is obtained and an original learning plan is generated from it. During execution of the original learning plan, first eloquence training data are acquired and analyzed in several eloquence dimensions to obtain an eloquence score for each dimension, and the original learning plan is adjusted according to the eloquence scores and the input information to obtain a target learning plan. Because the target learning plan is determined from personalized input information and per-dimension eloquence scores, it can adapt to the needs of different users. During execution of the target learning plan, second eloquence training data are acquired, and the first or the second eloquence training data serve as target eloquence training data. Sentiment analysis is performed on the target eloquence training data, and multi-modal feedback, including at least one of text feedback, sound feedback, visual feedback and tactile feedback, is determined from the result. Performing multi-modal feedback in combination with emotional factors helps improve the accuracy and diversity of the feedback and the user's training effect.

In one implementation, the user starts the system, logs in or registers, and enters the system. The user then performs input operations according to actual needs so that the system obtains input information, which includes but is not limited to learning goals, a learning path and available learning time. Each learning path in the system includes corresponding learning materials; learning paths include but are not limited to improving speaking confidence, improving speech skills, and improving persuasiveness. When the user selects a learning path for personalized learning, the system determines the corresponding learning materials, such as courses and textbooks. One or more learning goals may be offered for the user to choose from, including but not limited to speaking-skill improvement, career development goals, or language proficiency improvement. In the embodiments of the present application, after obtaining the input information the system displays a summary page for the user to confirm that the input information is correct. After confirmation, it automatically generates a personalized original learning plan from the input information, including a learning schedule, learning materials (such as video courses, articles and speech cases), exercises, practice frequency and difficulty level, to help the user gradually improve eloquence; eloquence training then begins in the learning environment of a holographic virtual tutor system.

In one implementation, when the original learning plan is executed, the user trains and learns eloquence with the learning materials, and during this process the system acquires the user's training data as the first eloquence training data. It should be noted that the first eloquence training data may include data such as captured pictures, videos and audio recordings.

In one implementation, the eloquence dimensions include but are not limited to fluency, confidence, articulation, and posture and body language. Fluency can be evaluated by computing the frequency and duration of speech interruptions; Confidence can be derived from voice analysis and sentiment analysis using natural language processing (NLP); Articulation can be evaluated from pronunciation accuracy and vocabulary diversity; and BodyLanguage can be evaluated, using holographic projection technology, from the coordination of body movements and the diversity of facial expressions. For example, the eloquence score for each dimension can be computed as follows:

Fluency = TotalSpeechDuration / (1 + PauseCount + PauseDuration)

Confidence = (PitchRange + EmotionScore) / 2

Articulation = (PronunciationAccuracy × VocabularyDiversity) / 100

BodyLanguage = (BodyCoordination + FacialExpressionDiversity) / 2

Here, PauseCount is the number of speech interruptions, PauseDuration is their total duration, and TotalSpeechDuration is the total duration of the speech; these can be obtained by analyzing the first eloquence training data. PitchRange is the range of variation of the voice pitch and EmotionScore is the sentiment analysis score; both can be obtained by analyzing the first eloquence training data with a deep learning model. PronunciationAccuracy is pronunciation accuracy, evaluated by a deep learning model that recognizes whether the user's pronunciation is accurate, and VocabularyDiversity is vocabulary diversity, obtained with a text analysis model. BodyCoordination is the coordination of body movements, evaluated by monitoring the user's posture and motions, and FacialExpressionDiversity is the diversity of facial expressions, evaluated by analyzing the user's facial expressions; both can be obtained with image processing algorithms.

In some implementations, a total composite score can also be computed: an eloquence dimension weight is assigned to each dimension, and a weighted sum of the dimension scores gives the total composite eloquence score.
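The four dimension formulas and the weighted composite score can be transcribed directly. This sketch reads the Confidence formula as the average of its two terms (parallel to the BodyLanguage formula); the composite weights are illustrative assumptions.

```python
# Transcription of the per-dimension scoring formulas, plus the
# weighted composite score. Input names mirror the quantities defined
# in the text; the composite weights are illustrative assumptions.

def fluency(total_speech_duration, pause_count, pause_duration):
    # More and longer pauses lower the fluency score.
    return total_speech_duration / (1 + pause_count + pause_duration)

def confidence(pitch_range, emotion_score):
    # Read as the average of pitch variation and the sentiment score.
    return (pitch_range + emotion_score) / 2

def articulation(pronunciation_accuracy, vocabulary_diversity):
    return (pronunciation_accuracy * vocabulary_diversity) / 100

def body_language(body_coordination, facial_expression_diversity):
    return (body_coordination + facial_expression_diversity) / 2

def composite(dim_scores, dim_weights):
    """Weighted sum of the per-dimension scores (composite eloquence score)."""
    return sum(w * s for w, s in zip(dim_weights, dim_scores))
```

For example, a 120-second speech with 5 pauses totalling 15 seconds scores Fluency = 120 / 21 ≈ 5.71 under the formula above.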

In some implementations, adjusting the original learning plan according to the eloquence scores and the input information in step S300 to obtain the target learning plan includes steps S310-S330:

S310: Determine a learning goal parameter according to the learning goals, determine a learning material parameter characterizing the effect of the learning materials according to the learning materials, determine a learning cost parameter according to the original learning plan, and determine a behavior modeling parameter according to user feedback information or historical data of the execution of the original learning plan.

Optionally, (1) the learning goal parameter gj is determined according to the learning goal(s). Learning goals can be quantified by factors such as their priority, relevance or difficulty, and the resulting score is the learning goal parameter gj. For example:

Priority: give each learning goal a priority score; when configuring goals the user selects the corresponding priority, e.g. urgent goals are assigned 5 points, important but not urgent goals 3 points, and ordinary goals 1 point;

Relevance: if a learning goal is highly relevant to the user's current ability level (for example, the goal targets improving an eloquence dimension, and the lower that dimension's score, the stronger the relevance), assign a value based on the degree of relevance, such as 10 points for high relevance, 5 points for medium relevance, and 1 point for low relevance.

Difficulty: a goal's difficulty can be quantified by the estimated time or effort needed to complete it, e.g. 10 points for a high-difficulty goal, 5 points for medium difficulty, and 1 point for low difficulty.
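Combining the three point scales above into a single goal parameter g_j might look like the following; the combination rule (a plain sum) is an assumption for illustration:

```python
# Illustrative scoring of a learning goal parameter g_j from the
# priority / relevance / difficulty point scales in the text.
# Summing the three components is an assumed combination rule.

PRIORITY_POINTS = {"urgent": 5, "important": 3, "normal": 1}
RELEVANCE_POINTS = {"high": 10, "medium": 5, "low": 1}
DIFFICULTY_POINTS = {"high": 10, "medium": 5, "low": 1}

def goal_parameter(priority, relevance, difficulty):
    return (PRIORITY_POINTS[priority]
            + RELEVANCE_POINTS[relevance]
            + DIFFICULTY_POINTS[difficulty])

g_j = goal_parameter("urgent", "high", "medium")  # 5 + 10 + 5 = 20
```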

Optionally, (2) determine the learning material parameter m_k characterizing the effectiveness of each learning material. Materials can be quantified by factors such as applicability, comprehensibility, and user feedback; the resulting score is the learning material parameter m_k. For example:

Applicability: score by how well the material matches the learning goals; a complete match is assigned 10 points, a partial match 5 points, and no match 0 points.

Comprehensibility: score by the material's complexity; easy-to-understand material receives a high score and hard-to-understand material a low one, e.g. 10 points for simple material and 1 point for complex material.

User feedback: evaluate the material's effectiveness from historical user feedback; highly effective material is assigned 10 points, moderately effective material 5 points, and ineffective material 1 point.

Optionally, (3) determine the learning cost parameter c_l from the original learning plan (costs such as time and resources; the difficulty of the materials, the learning speed, and how demanding the learning goals are all determine the time cost). Learning cost can be quantified by time cost, resource consumption, and so on; the resulting score is the learning cost parameter c_l. For example:

Time cost: score by the estimated time needed to complete the learning task; a short time receives a low score (e.g. 1 point for completion within 1 hour) and a long time a high score (e.g. 10 points for completion taking more than 10 hours), and multiple time intervals with corresponding scores can be configured.

Resource consumption: score by the resources (money, supplies, etc.) the learning task requires. Taking money as an example, low consumption receives a low score (e.g. 1 point for under 10 yuan) and high consumption a high score (e.g. 10 points for over 1000 yuan); multiple consumption intervals with corresponding scores can be configured.
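The interval-to-score mapping used for both time and resource cost can be sketched with one helper; the interior interval boundaries between the stated endpoints are assumptions:

```python
# Sketch of mapping a continuous cost (time in hours, money in yuan)
# to interval scores. Only the endpoints (1 h -> 1 point, >10 h -> 10
# points; 10 yuan -> 1 point, >1000 yuan -> 10 points) come from the
# text; the interior steps are assumed.

def interval_score(value, boundaries, scores):
    """Return the score of the first interval that contains value."""
    for bound, score in zip(boundaries, scores):
        if value <= bound:
            return score
    return scores[-1]  # value exceeds every boundary

time_score = interval_score(12.0, [1, 3, 5, 10], [1, 3, 5, 8, 10])
money_score = interval_score(8.0, [10, 100, 500, 1000], [1, 3, 5, 8, 10])
```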

Optionally, (4) determine the behavior modeling parameter b_m from user feedback or from historical data of executing the original learning plan (for example, factors such as the user's learning preference, speed, and memory derived from historical data, or preference factors the user enters directly). Behavior can be quantified by the user's learning preference, speed, memory, and so on; the resulting score is the behavior modeling parameter b_m. For example:

Learning preference: assign points according to the learning method the user further selects within the chosen learning path; a preference for video learning is assigned 10 points, a preference for reading 5 points, and no particular preference 1 point.

Learning speed: determine from historical data the average time the current user, or all users, takes to complete the same or similar learning tasks (learning goals). Fast learners receive high scores (e.g. 10 points for mastering a new skill within 1 day) and slow learners low scores (e.g. 1 point for taking more than a week); multiple threshold intervals can be configured to distinguish fast from slow.

Memory: score by the user's ability to recall learned content; strong memory receives a high score (10 points for fully recalling more than 90% of the content) and weak memory a low score (1 point for recalling less than 50%).

S320: Determine the plan score of the original learning plan from the eloquence scores, learning goal parameters, learning material parameters, learning cost parameters, available learning time, and behavior modeling parameters.

Optionally, the plan score P of the original learning plan is computed as:

P = Σ_i (w_i × K_i) + Σ_j (w_gj × g_j) + Σ_k (w_mk × m_k) − Σ_l (w_cl × c_l) + Σ_m (w_bm × b_m) + f(P′, T)

Optionally, the eloquence scores, learning goal parameters, learning material parameters, learning cost parameters, and behavior modeling parameters each have a corresponding weight parameter, and the weighted formula above yields the plan score P of the original learning plan. Here K_i is the eloquence score of the i-th eloquence dimension and w_i its eloquence dimension weight; w_gj is the weight of the learning goal parameter g_j of the j-th learning goal; w_mk is the weight of the learning material parameter m_k of the k-th learning material (such as a course); w_cl is the weight of the learning cost parameter c_l of the l-th cost; w_bm is the weight of the behavior modeling parameter b_m of the m-th factor; and T is the available learning time (in minutes or seconds). f(P′, T) is a function for evaluating and optimizing the learning plan, used to assess its quality; for example, it can be the output function of a pre-trained deep learning model. P′ comprises the eloquence scores, learning goal parameters, learning material parameters, learning cost parameters, and behavior modeling parameters; feeding it together with T into the deep learning model yields the evaluation score f(P′, T). The goal of this function is to ensure the learning plan meets the user's needs, improves learning outcomes, and provides a personalized learning experience; factors such as the comprehensiveness of the plan, user satisfaction, and degree of goal achievement can be considered.
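A direct transcription of the plan-score computation described above might look like this; the evaluation term f(P′, T) is replaced by a stand-in constant, since in the disclosure it may be the output of a trained deep learning model, and all numeric values are illustrative:

```python
# Plan score P: weighted sums of eloquence scores, goal, material, and
# behavior parameters, minus the weighted cost terms, plus an
# evaluation term f(P', T). f_eval stands in for that model output.

def plan_score(K, w, g, wg, m, wm, c, wc, b, wb, f_eval):
    dot = lambda xs, ws: sum(x * v for x, v in zip(xs, ws))
    return (dot(K, w) + dot(g, wg) + dot(m, wm)
            - dot(c, wc) + dot(b, wb) + f_eval)

P = plan_score(
    K=[0.8, 0.6], w=[0.5, 0.5],  # eloquence scores / dimension weights
    g=[20], wg=[0.01],           # learning goal parameters / weights
    m=[10], wm=[0.02],           # learning material parameters / weights
    c=[5], wc=[0.05],            # learning cost parameters (subtracted)
    b=[10], wb=[0.01],           # behavior modeling parameters / weights
    f_eval=0.1,                  # stand-in for f(P', T)
)
```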

S330: When the plan score is greater than a score threshold, adjust the original learning plan to obtain the target learning plan.

Optionally, the score threshold can be set in practice, for example 0.5. When the plan score exceeds 0.5, the original learning plan is considered to be heading in the right overall direction and to be basically effective and executable, requiring only minor adjustments; adjusting it then yields the target learning plan. For example, sub-score thresholds can be set: when one component of the plan score falls below its sub-threshold, that component is adjusted. If an eloquence dimension's score falls below its sub-threshold, learning materials and exercises for that dimension can be added, with a recommendation algorithm selecting suitable materials from the system's learning resource library. If a learning goal parameter's score falls below its sub-threshold (for example, when the user's goal is to improve emotional appeal), the planned materials can focus more on content related to emotional appeal, adding more materials tied to the goal; other components are adjusted in the same way. The system can also use the learning materials and the user's available learning time to decide which materials and exercises the plan includes and how learning time is allocated, and can weigh the plan's cost and the behavior modeling parameters (the user's learning preference, speed, memory, and so on) to ensure the plan's feasibility and the user's comfort; details are not repeated here. Alternatively, instead of setting thresholds, the system can compare which component scores lowest and then determine the object and direction of adjustment. For example:

Learning goal adjustment: if analysis shows the learning goal parameters g_j contribute little to the total score, the goals may be insufficiently specific, insufficiently challenging, or mismatched with the user's actual needs. Adjust by increasing or decreasing the number of goals, changing their difficulty, or revising their content;

Learning material adjustment: if the learning material parameters m_k contribute little to the total score, the chosen materials may be mismatched with the goals, poorly pitched in difficulty, or uninteresting to the user. Replace or adjust the type, difficulty, and content of the materials based on user feedback and learning outcomes.

Learning cost adjustment: if the learning cost parameters contribute little to the total score, the plan may demand more time, effort, or other resources than the user can accept. Reduce the cost by adjusting the plan's intensity, frequency, or duration.

Behavior modeling parameter adjustment: if the behavior modeling parameters contribute little to the total score, the plan may not fit the user's learning preferences, speed, or memory well. Improve the fit by adjusting the learning method, providing a personalized learning path, or adding review and practice sessions.

Through the adjustments above, the original learning plan is concretely adjusted, the target learning plan is obtained, and the new target learning plan is put into effect. While the target learning plan is executed, user feedback and learning outcome data are continuously collected for further optimization of the plan.
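The dispatch described above (find the component contributing least to the total score and adjust it) can be sketched as follows; the component names and action strings paraphrase the text and are not literal system outputs:

```python
# Pick the plan-score component with the lowest contribution and map
# it to an adjustment action. Names and actions are illustrative.

ADJUSTMENTS = {
    "goals": "refine goal count, difficulty, or content",
    "materials": "swap or re-level the learning materials",
    "cost": "reduce intensity, frequency, or duration",
    "behavior": "personalize method and path, add review sessions",
}

def weakest_component(contributions):
    """contributions: component name -> contribution to the plan score."""
    name = min(contributions, key=contributions.get)
    return name, ADJUSTMENTS[name]

name, action = weakest_component(
    {"goals": 0.30, "materials": 0.10, "cost": 0.25, "behavior": 0.20})
```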

Understandably, if the plan score is less than or equal to 0.5, the original learning plan is ineffective: its results fall far short of the standard and a larger adjustment is needed. In that case the system needs to communicate with the user in depth to understand their real needs and expectations, prompt the user to reset the learning goals so that the goals are relevant and attainable, and then comprehensively review and restructure the plan, adjusting the original learning plan to obtain the target learning plan. For example:

Adjustment of learning goals (re-evaluate and redefine them): a very low plan score P may mean the original goals do not match the user's actual needs or ability. Communicate with the user in depth to understand their real needs and expectations, prompt them to reset the learning goals, and ensure the goals are relevant and attainable.

Adjustment of learning materials (replace them entirely): if the existing materials fail to support the learning goals effectively or the user's feedback on them is negative, replace the materials completely, select materials better suited to the user's current level and goals, and ensure the materials are varied and interactive to raise interest in learning.

Adjustment of the learning path (including the learning method): a low plan score P may mean the current learning method and path do not suit the user. Consider prompting the user to reselect the learning path (including the method) or introduce new learning techniques, such as adaptive learning systems, gamified learning, or blended learning, to improve learning efficiency and user engagement.

Adjustment of learning cost (re-examine it): for a low plan score caused by excessive cost, adjust both time and resources, for example by shortening the learning cycle, lowering the learning frequency, or using more free resources to lighten the user's burden.

Adjustment of behavior modeling parameters (personalize the plan): a low plan score may mean the plan does not adapt well to the user's individual needs. More advanced data analysis and machine learning techniques can then tailor a personalized learning path and content from the user's learning history, preferences, and feedback.

Then, determine the new target learning plan based on the adjustments above, implement it, and monitor continuously: a dynamic adjustment and feedback loop establishes an ongoing monitoring and feedback mechanism that regularly evaluates the plan's effectiveness and makes further adjustments according to the user's progress and feedback. This dynamic process helps ensure the plan always matches the user's actual needs and can adapt to their changing learning state.

In summary, the original learning plan can be further adjusted for the user's first eloquence training data, generating a more complete target learning plan that better satisfies the user's individual needs, and the plan can be continuously optimized to maximize the user's overall satisfaction and progress.

In one implementation, second eloquence training data is acquired while the target learning plan is executed, and either the first or the second eloquence training data serves as the target eloquence training data. Using the first eloquence training data lets subsequent analysis give the user feedback more quickly; using the second eloquence training data further improves the feedback's accuracy.

In one implementation, step S500 of performing sentiment analysis on the target eloquence training data to obtain sentiment analysis results includes steps S510-S530:

S510: Perform multi-modal sentiment analysis on the target eloquence training data with a sentiment analysis engine to obtain text, voice, and image sentiment analysis results.

Optionally, the system performs multi-modal sentiment analysis on the target eloquence training data through a sentiment analysis engine comprising NLP components, acoustic models, and computer vision techniques, obtaining text, voice, and image sentiment analysis results.

Optionally, the text sentiment analysis result Sentiment(T) is obtained with natural language processing (NLP) techniques combined with the eloquence dimension indicators; the formula can be:

Sentiment(T) = Σ_i (w_i × S_i) + Σ_j (v_j × K_j)

Here w_i is the weight of an emotion word and S_i that word's sentiment score, v_j is the eloquence dimension weight of a dimension, and K_j is the eloquence score of the j-th eloquence dimension. The speech text corresponding to the user's target eloquence training data can be segmented and part-of-speech tagged to identify emotion words and other keywords, and a sentiment lexicon then assigns each word in the text a sentiment score.
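The text-sentiment computation just described (a lexicon-weighted sum of emotion-word scores plus the weighted eloquence scores) can be sketched as follows; the sample weights and scores are invented for illustration:

```python
# Sentiment(T): sum of emotion-word weights times word sentiment
# scores, plus eloquence dimension weights times eloquence scores.
# All numeric inputs below are illustrative assumptions.

def sentiment_text(word_weights, word_scores, dim_weights, dim_scores):
    lexical = sum(w * s for w, s in zip(word_weights, word_scores))
    eloquence = sum(v * k for v, k in zip(dim_weights, dim_scores))
    return lexical + eloquence

score = sentiment_text(
    word_weights=[1.0, 0.5],   # w_i for each detected emotion word
    word_scores=[0.8, -0.2],   # S_i sentiment score per word
    dim_weights=[0.6, 0.4],    # v_j eloquence dimension weights
    dim_scores=[0.7, 0.5],     # K_j eloquence scores
)
```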

Optionally, the voice sentiment analysis result Sentiment(A) is obtained with a sentiment model combined with the eloquence dimension indicators; the formula can be:

Sentiment(A) = Σ_i (a_i × f_i) + Σ_j (v_j × K_j)

Here a_i is the weight of a sound feature (sound features such as pitch, volume, and speaking rate can be extracted with an acoustic model) and f_i is the score of the i-th sound feature, n is the number of sound features and M is the number of eloquence dimensions, v_j is the eloquence dimension weight, and K_j is the eloquence score of the j-th eloquence dimension.

Optionally, the image sentiment analysis result Sentiment(I) uses an image analysis model; image sentiment analysis combines facial expression and body language features with the eloquence dimension indicators, and the formula can be:

Sentiment(I) = Σ_i (b_i × p_i) + Σ_j (v_j × K_j)

Here b_i is the weight of an image feature (facial expressions such as smiling, anger, or sadness; body language features such as posture, gestures, and limb movements; environmental features such as whether the person's surroundings are pleasant, quiet, or noisy) and p_i is the score of the i-th image feature, n is the number of image features and M is the number of eloquence dimensions, v_j is the eloquence dimension weight, and K_j is the eloquence score of the j-th eloquence dimension.

In some implementations, a first weight parameter α and a second weight parameter β can be configured to compute the combined sentiment analysis score Sentiment_Combined:

Sentiment_Combined = α × (Sentiment(T) + Sentiment(A) + Sentiment(I)) + β × (K_1 + K_2 + ... + K_j)
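The combined score above is a two-term weighted sum and can be transcribed directly; the values of α, β, the modality scores, and the K_j below are illustrative assumptions:

```python
# Sentiment_Combined: alpha scales the sum of the three modality
# sentiment results, beta scales the summed eloquence scores.
# All numeric values are invented for illustration.

def sentiment_combined(s_text, s_audio, s_image, K, alpha, beta):
    return alpha * (s_text + s_audio + s_image) + beta * sum(K)

combined = sentiment_combined(
    s_text=0.7, s_audio=0.6, s_image=0.5,
    K=[0.8, 0.6],        # per-dimension eloquence scores K_j
    alpha=0.7, beta=0.3,
)
```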

S520: Transform the text, voice, and image sentiment analysis results with a multi-modal feedback generator to obtain the emotion intensity and the emotion type.

Optionally, the system transforms the text, voice, and image sentiment analysis results through the multi-modal feedback generator to obtain the emotion intensity E and the emotion type T′.

In one implementation, step S500 of determining the multi-modal feedback according to the sentiment analysis results includes steps S510-S550:

S510: Determine the target eloquence score corresponding to each eloquence dimension of the target eloquence training data.

It should be noted that, following the principle of the eloquence score formulas for the eloquence dimensions described above, the target eloquence score D_n corresponding to each eloquence dimension of the target eloquence training data can be computed, that is, the target eloquence score of the n-th eloquence dimension, where the total number of eloquence dimensions is N.

S520: When the emotion type is a positive emotion, set the emotion type's target value to a first value; otherwise set the target value to a second value.

For example, when the emotion type is a positive emotion such as happiness, joy, or excitement, the emotion type target values β_2, δ_2, ν_2, σ_2 are set to a first value; otherwise they are set to a second value, the first value being greater than the second.

S530: Determine the emotion weight parameters for the multi-modal feedback from the emotion intensity.

Optionally, several intensity ranges can be configured together with a value of the emotion weight parameter β_1 for each range; the higher the intensity range, the higher the value of the emotion weight parameter, so emotion intensity is positively correlated with the emotion weight parameter.

S540: Perform a weighted calculation over each target eloquence score, the eloquence score weight parameters, the target value, the emotion intensity, and the emotion weight parameters to determine the feedback scores of the multi-modal feedback.

Optionally, this embodiment takes multi-modal feedback comprising text feedback, voice feedback, visual feedback, and haptic feedback as an example, so the feedback scores of the multi-modal feedback include a text feedback score, a voice feedback score, a visual feedback score, and a haptic feedback score; other embodiments may include one or more of them, without limitation. For example:

Text feedback score: Text_Feedback = (α_1×D_1 + α_2×D_2 + ... + α_N×D_N) + β_1×E + β_2×T′

Voice feedback score: Speech_Feedback = (γ_1×D_1 + γ_2×D_2 + ... + γ_N×D_N) + δ_1×E + δ_2×T′

Visual feedback score: Visual_Feedback = (μ_1×D_1 + μ_2×D_2 + ... + μ_N×D_N) + ν_1×E + ν_2×T′

Haptic feedback score: Haptic_Feedback = (ρ_1×D_1 + ρ_2×D_2 + ... + ρ_N×D_N) + σ_1×E + σ_2×T′

Here α_i, β_i, γ_i, δ_i, μ_i, ν_i, ρ_i, and σ_i are weight coefficients that can be adjusted to the specific situation.
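Since all four modality scores share the same shape (a weighted sum of the target eloquence scores D_1..D_N plus an intensity term and a type term), one helper can sketch them; every numeric weight below is an illustrative assumption, not a value from the disclosure:

```python
# Generic modality feedback score: weighted target eloquence scores
# D_n, plus an emotion-intensity term and an emotion-type term.
# The weight values stand in for (alpha, beta), (gamma, delta), etc.

def feedback_score(dim_weights, D, w_intensity, E, w_type, T_val):
    return (sum(w * d for w, d in zip(dim_weights, D))
            + w_intensity * E + w_type * T_val)

D = [0.8, 0.6]        # target eloquence scores D_n
E, T_val = 0.9, 1.0   # emotion intensity, emotion-type target value

text_fb = feedback_score([0.5, 0.5], D, 0.2, E, 0.1, T_val)    # alpha/beta
speech_fb = feedback_score([0.4, 0.6], D, 0.3, E, 0.1, T_val)  # gamma/delta
```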

S550: According to the feedback scores, provide at least one of text feedback, voice feedback, visual feedback, and haptic feedback. This embodiment takes all four as an example; other embodiments may include one or more of them. The generated multi-modal feedback is integrated to ensure consistency and coordination. The system can also collect the user's feedback on this feedback to gauge acceptance and satisfaction, and, based on the user's responses and learning progress, adjust the weight coefficients to optimize the learning experience. With this more elaborate design, the system generates multi-modal feedback from multiple eloquence dimensions and sentiment analysis results, providing more personalized and highly accurate learning support, and the fusion of eloquence expression with sentiment analysis gives the user a distinctive learning experience.

Optionally, S550 can include S5501-S5502:

S5501: When the feedback score is greater than a feedback threshold, provide, through the virtual tutor system, at least one of positive text, positive voice, a positive image or animation, and haptic feedback of a first intensity.

Optionally, the feedback threshold can be adjusted in practice. When the feedback score exceeds it, the virtual tutor system returns positive text, for example "Your speech was rich in content and left a deep impression!"; positive voice, for example saying in a cheerful, happy tone "Your energy is great!"; a positive image or animation, for example a smiling avatar or a pleasant animation; and, through the virtual tutor system's haptic feedback device, haptic feedback of the first intensity, including physical vibration or tactile simulation.

It should be noted that the virtual tutor system can present the virtual tutor to the user as a holographic projection: the user sees the tutor's three-dimensional holographic figure, including its body language and facial expressions. The user can choose one of the virtual tutors the system provides; virtual tutors typically differ in personality, expertise, and style, and the user selects according to their own preferences and needs. By watching the tutor's figure, the user obtains emotional feedback and perceives the tutor's emotional state and reactions. The virtual tutor can also give emotional feedback through voice output, introducing or commenting on the user's performance by voice according to the speech and its emotional character; listening to this voice feedback helps the user understand the tutor's emotional reactions and suggestions in more depth. In addition, the virtual tutor can convey physical sensations through a haptic feedback device to emphasize or highlight the important parts of the emotional feedback, so the user feels the tutor's emotional reaction through touch. Through these multiple means of interaction, the user receives comprehensive emotional feedback, visual, auditory, textual, and tactile, which helps the user better understand the virtual tutor's emotional reactions and suggestions and thereby improve their own speaking performance.

S5502: When the feedback score is less than or equal to the feedback threshold, provide, through the virtual tutor system, at least one of negative text, negative voice, a negative image or animation, and haptic feedback of a second intensity; the first intensity is greater than the second intensity.

Conversely, when the feedback score is less than or equal to the feedback threshold, the virtual tutor system returns negative text, negative voice, a negative image or animation, and haptic feedback of the second intensity, the first intensity being greater than the second.

In some embodiments, the feedback threshold can be subdivided further, and the corresponding text, voice, visual, and haptic feedback can be refined accordingly. For example, the feedback thresholds can be divided into:

(1) Excellent threshold: score greater than or equal to 90; corresponding feedback:

Text feedback: "You did an excellent job! Keep it up!" (positive text of a first emotional intensity);

Voice feedback: an encouraging, appreciative tone (a first tone);

Visual feedback: an animation of victory or celebration (an animation of a first degree of change; the degrees of change can be calibrated in advance, for example by playing different animations for testers while measuring their heartbeat and brain signals with a device, and ranking the animations by the magnitude of the response);

Tactile feedback: strong positive vibration of a first vibration intensity, delivered as rapid continuous vibration.

(2) Good threshold: score between 80 and 89. Corresponding feedback:

Text feedback: "Well done; a few small areas could be improved further." (positive text of a second emotional intensity, the second emotional intensity being lower than the first);

Voice feedback: a gentle, encouraging tone (the second tone, softer than the first);

Visual feedback: an animation of a smile or nod (an animation of a second degree of change, smaller than the first);

Tactile feedback: positive vibration of a second vibration intensity (lower than the first), delivered as intermittent vibration.

(3) Medium threshold: score between 70 and 79. Corresponding feedback:

Text feedback: "Good performance, but there is still room for improvement." (positive text of a third emotional intensity, the third emotional intensity being lower than the second);

Voice feedback: suggestions delivered in a calm tone (the third tone, softer than the second);

Visual feedback: an animation with a neutral expression (an animation of a third degree of change, smaller than the second);

Tactile feedback: a vibration reminder of a third vibration intensity (lower than the second).

(4) Improvement threshold: score between 60 and 69. Corresponding feedback:

Text feedback: "Some areas need improvement." (negative text of a fourth emotional intensity);

Voice feedback: a tone of mild concern (the fourth tone, softer than the third);

Visual feedback: an animation of a slight frown or thoughtful expression (an animation of a fourth degree of change, smaller than the third);

Tactile feedback: a slow single-vibration prompt of a fourth vibration intensity (lower than the third).

(5) Attention-needed threshold: score below 60. Corresponding feedback:

Text feedback: "Several key points need particular attention and improvement." (negative text of a fifth emotional intensity, the fifth emotional intensity being greater than the fourth);

Voice feedback: a concerned, earnest tone (the fifth tone, softer than the fourth);

Visual feedback: a sign or animation indicating that attention is needed (an animation of a fifth degree of change, smaller than the fourth);

Tactile feedback: a repeated slow-vibration prompt of a fifth vibration intensity (lower than the fourth), indicating that attention is required.

It should be noted that in practical applications, besides the score ranges and thresholds above, the feedback strategy and content can also be adjusted for factors such as the user's personal preferences, history of accepting feedback, and learning environment. The system can dynamically adjust the feedback strategy and content based on the user's actual response to feedback (such as satisfaction surveys and improvements in learning progress) to achieve the best teaching effect. In this way the computed scores are translated into concrete, personalized feedback content and strategies, providing the user with more effective and targeted learning support.
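The tier mapping above can be sketched as a simple threshold lookup. This is a minimal illustration: the message strings and the 1-to-5 vibration levels are placeholders standing in for the tiered feedback described, not the system's actual content.

```python
def feedback_for_score(score):
    """Map an eloquence feedback score to a multimodal feedback tier.

    Tier boundaries follow the five thresholds in the text; the
    messages and vibration levels (1 = strongest) are illustrative
    placeholders.
    """
    tiers = [
        (90, "excellent", "Outstanding performance, keep it up!", 1),
        (80, "good", "Well done; a few small areas could be improved.", 2),
        (70, "medium", "Good performance, but room for improvement.", 3),
        (60, "improve", "Some areas need improvement.", 4),
        (0, "attention", "Several key points need particular attention.", 5),
    ]
    for lower_bound, tier, text, intensity in tiers:
        if score >= lower_bound:
            return {"tier": tier, "text": text, "vibration_level": intensity}
    raise ValueError("score must be non-negative")
```

In practice the returned tier would then be rendered through the text, voice, visual, and tactile channels described above, with per-user adjustments layered on top.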

Optionally, the interactive interface of the virtual tutor system also offers a range of functions: the user can request more detailed feedback, ask questions about speaking skills, deliver practice speeches, and so on, conversing with the system in real time and interacting according to their needs. After the user's speech ends, the system provides a summary and evaluation report showing the user's performance across the different eloquence dimensions, helping the user understand their strengths and the areas needing improvement. If the user chooses to continue learning, the system adjusts the personalized learning path and practice plan according to their performance and needs, helping them steadily improve their speaking skills.

In one implementation, the multimodal feedback method for eloquence training of the embodiments of the present application may further include steps S610 to S640:

S610: Perform feature extraction on the target eloquence training data to obtain several eloquence dimension indicators and time-series eloquence dimension indicators.

Optionally, feature extraction is performed on the target eloquence training data by a feature extraction model to obtain several eloquence dimension indicators S1, S2, S3, …, Sm (covering dimensions such as confidence, speaking rate, emotional state, and facial expression) as well as time-series eloquence dimension indicators S1,t, S2,t, S3,t, …, Sm,t, that is, the dimension indicators annotated with the corresponding time point t.

S620: Determine a first emotion perception indicator from the eloquence dimension indicators and preset weights, and determine a second emotion perception indicator from the time-series eloquence dimension indicators and the preset weights.

Optionally, a first context perception indicator C1 is defined to represent the user's speaking context (different contexts carry different weights), computed as the weighted combination of the dimension indicators:

C1 = w1·S1 + w2·S2 + … + wm·Sm

Optionally, a sliding-window method can be used to compute the time-series eloquence dimension indicator over a period of time, that is, a second emotion perception indicator C2:

C2 = (1/N) · Σt Σi wi·Si,t

where the outer sum runs over the N most recent time points. Here N is the size of the time window; adjusting the window size captures context changes over different time spans, and the wi are the preset weights. It should be noted that a machine learning algorithm is used to adjust the preset weights dynamically according to user feedback, learning progress, and system performance.
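A minimal sketch of the two indices, assuming C1 is the weighted sum of the current dimension scores and C2 the average of that weighted sum over the last N time points:

```python
def context_index(scores, weights):
    """C1: weighted sum of the m current eloquence-dimension scores S_i."""
    return sum(w * s for w, s in zip(weights, scores))

def windowed_index(score_series, weights, window):
    """C2: average of C1 over the last `window` (= N) time points.

    `score_series` is a list, one entry per time point t, of score
    vectors [S_{1,t}, ..., S_{m,t}].
    """
    recent = score_series[-window:]
    return sum(context_index(s, weights) for s in recent) / len(recent)
```

Enlarging `window` smooths the index over longer spans, while a small window reacts quickly to context changes, matching the tunable N described above.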

Optionally, a deep learning model can also be introduced to improve the accuracy of context perception. For example, a multi-layer neural network can be built that takes the eloquence dimension indicators as input features and, through nonlinear transformations and hierarchical abstraction, learns deeper context patterns. Suppose there is a deep neural network model F(·) whose input is the time-series data of the eloquence dimensions {S1,t, S2,t, …, Sm,t} and whose output is a continuous context perception score C3:

C3 = F({S1,t, S2,t, …, Sm,t})

The model can be trained by a backpropagation algorithm to minimize the error between the predicted context perception score and the actual feedback, thereby achieving accurate recognition of the user's speaking context.
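As an illustration only, a toy, untrained stand-in for F(·) can be written as follows. The single-hidden-layer architecture, hidden size, and random weights are assumptions made for the sketch; a real system would learn the weights by backpropagation against observed feedback.

```python
import math
import random

def situational_score(series, hidden=8, seed=0):
    """Toy stand-in for the deep model F(.): flatten the time series
    {S_{i,t}}, pass it through one tanh hidden layer, and emit a
    scalar context perception score C3.

    The architecture and the (untrained) random weights are purely
    illustrative.
    """
    rng = random.Random(seed)
    x = [float(v) for row in series for v in row]        # flatten {S_{i,t}}
    w1 = [[rng.gauss(0, 1) for _ in x] for _ in range(hidden)]
    w2 = [rng.gauss(0, 1) for _ in range(hidden)]
    h = [math.tanh(sum(w * xv for w, xv in zip(row, x))) for row in w1]
    return sum(w * hv for w, hv in zip(w2, h))           # scalar C3
```

Fixing the seed makes the sketch deterministic, which is convenient for testing; training would replace the random weights with learned ones.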

S630: Determine emotional feedback suggestions based on the first emotion perception indicator and/or the second emotion perception indicator.

For example, if the first emotion perception indicator is below an indicator threshold, and/or the second emotion perception indicator is below the indicator threshold, this suggests deficiencies in one or more dimensions such as confidence, speaking rate, emotional state, or facial expression, and corresponding encouragement and suggestions are determined. For instance, when the first emotion perception indicator falls below the threshold, the component within it that is below a computed score threshold, or that is smallest (it may, for example, be the emotional-state component), is identified; emotional feedback suggestions for improving the emotional state are then determined, or the interaction strategy is adjusted to suit the new context. Likewise, the component of the second emotion perception indicator that is below the computed score threshold, or that is smallest, is identified and the corresponding emotional feedback suggestions are determined.
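The selection of the weakest component can be sketched as follows; the dimension names and threshold value used in the test are illustrative placeholders, not the system's actual vocabulary.

```python
def weakest_dimension(scores, score_threshold):
    """Return the single weakest dimension and all dimensions whose
    component scores fall below the threshold.

    `scores` maps dimension name -> component score (e.g. the w_i * S_i
    terms of the emotion perception indicator). Returns (None, []) when
    every component clears the threshold.
    """
    below = {d: s for d, s in scores.items() if s < score_threshold}
    if not below:
        return None, []
    weakest = min(below, key=below.get)
    return weakest, sorted(below)
```

The returned weakest dimension would then drive which encouragement or adjustment suggestion the tutor generates.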

S640: Generate the emotional feedback suggestions in real time, or use a reinforcement learning method to determine target emotional feedback suggestions from the emotional feedback suggestions and a reward function, and generate the target emotional feedback suggestions.

Optionally, after the emotional feedback suggestions are determined, they are generated in real time in the virtual tutor system, for example displayed on the page as speech or as text. In some implementations, a reinforcement learning method can further be applied: define the state space S as the set of all possible context perception outcomes, the action space A as the set of emotional feedback suggestions the system can execute, and the reward function R(s, a) as the immediate reward obtained after taking action a (an emotional feedback suggestion) in state s. Through reinforcement learning methods such as Q-learning or a deep Q-network (DQN), the system can iteratively optimize itself across different contexts and find the best interaction strategy, that is, the target emotional feedback suggestions, which are then generated. In summary, this scheme combines multi-dimensional eloquence analysis, time-series modeling, deep learning, and reinforcement learning into a context-aware, real-time interaction module that both fully understands the user's speaking context and flexibly provides personalized real-time feedback, reflecting its originality and advancement.
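A minimal tabular Q-learning sketch of the strategy search described above. The discretized context states, the feedback actions, and the reward function are hypothetical stand-ins, and episodes are treated as single-step, so the update omits the bootstrapping term.

```python
import random
from collections import defaultdict

def train_feedback_policy(reward, states, actions,
                          episodes=2000, alpha=0.1, eps=0.2):
    """Tabular Q-learning over discretized context states s and
    feedback-suggestion actions a.

    `reward(s, a)` plays the role of R(s, a); with single-step
    episodes the update reduces to q += alpha * (r - q).
    Returns the greedy policy: the best suggestion per state.
    """
    q = defaultdict(float)
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.choice(states)
        if rng.random() < eps:                    # epsilon-greedy exploration
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return {s: max(actions, key=lambda act: q[(s, act)]) for s in states}
```

A DQN would replace the table `q` with a neural network over continuous context features, but the control loop stays the same.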

In one implementation, the multimodal feedback method for eloquence training of the embodiments of the present application may further include steps S710 to S720:

S710: Compute a fluency score and a confidence score for the target eloquence training data.

Optionally, the fluency score Sfluency is computed as:

Sfluency = (1/n) · Σi (1 − ti,silence / ti,total)

where n is the number of speech paragraphs in the target eloquence training data, ti,silence is the silence duration of the i-th speech paragraph, and ti,total is the total duration of the i-th speech paragraph. A higher Sfluency indicates a more fluent speech.

The confidence score Sconfidence is computed as:

Sconfidence = (1/k) · Σj (vj / v̄)

where vj is the speaking rate of the j-th speech paragraph, v̄ is the average speaking rate over the entire target eloquence training data, and k is the number of paragraphs. A higher Sconfidence indicates a more confident speech.
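A minimal sketch of the two scores under an assumed reading of the formulas: fluency as the average non-silent fraction per paragraph, and confidence as the mean ratio of each paragraph's speaking rate to the overall average rate. The input representations are likewise assumptions for the sketch.

```python
def fluency_score(segments):
    """Average spoken fraction per speech paragraph: 1 - silence/total.

    `segments` is a list of (silence_seconds, total_seconds) pairs.
    Higher values indicate a more fluent speech.
    """
    return sum(1 - sil / tot for sil, tot in segments) / len(segments)

def confidence_score(paragraphs):
    """Mean ratio of each paragraph's speaking rate v_j to the overall
    average rate v_bar (total words / total duration).

    `paragraphs` is a list of (word_count, duration_seconds) pairs.
    Higher values indicate a more confident delivery.
    """
    v_bar = sum(w for w, _ in paragraphs) / sum(d for _, d in paragraphs)
    ratios = [(w / d) / v_bar for w, d in paragraphs]
    return sum(ratios) / len(ratios)
```

Both scores fall naturally in a small range around 1.0 and can be stored directly in the user's eloquence dimension indicator profile.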

Optionally, the fluency score and confidence score of the target eloquence training data can be saved in the user's eloquence dimension indicator profile; the system continuously monitors the user's speech performance and keeps the profile up to date.

S720: Output improvement suggestions and/or strengths through the virtual tutor system according to the fluency score and the confidence score.

Optionally, if the fluency score is greater than or equal to a score threshold, the virtual tutor system can report fluency as a strength, so that the user knows it and keeps it up; if the fluency score is below the threshold, the system instead outputs improvement suggestions on how to raise fluency. Further, on the same basis, if the confidence score is greater than or equal to the score threshold, the system can report confidence as a strength; if the confidence score is below the threshold, the system outputs improvement suggestions on how to increase confidence.

Optionally, the system of the embodiments of the present application also provides a virtual tutor community, where users can interact with other users, share experience, and take part in competitions:

1. Logging in to the community: after launching the virtual tutor system, the user can choose to access the virtual tutor community. A user already logged in to the system enters the community directly; otherwise the system asks for a username and password, or for another authentication method.

2. Browsing the community: once logged in, users can browse different community sections and topics, which typically cover speaking techniques, eloquence training, speech experience sharing, competition discussions, and so on. Users can pick topics of interest and browse related posts and discussions.

3. Interaction and sharing: users can interact with other community members by commenting, liking, sharing their own speaking experience, asking questions, or answering other users' questions. This interaction helps users connect with others, share knowledge, and obtain feedback.

4. Participating in competitions: the community also includes a speech competition section where users can enter various contests, showcase their speaking skills, and compete with other users. Competitions can be categorized by eloquence dimension and topic so that users can choose the ones that suit them.

5. Learning resources: the community also provides rich learning resources, including tutorials, example speeches, professional advice, and study materials, which users can browse to gain knowledge and skills across the eloquence dimensions.

6. Making connections: through the community, users can connect with other speaking enthusiasts as friends, study partners, or collaborators, helping them expand their network and progress together.

7. Community management: the virtual tutor community is typically run by administrators and moderators who ensure the quality and order of its content; administrators monitor inappropriate content and take appropriate measures to maintain a healthy atmosphere.

Through the virtual tutor community, users can interact, share experience, obtain advice, enter competitions, and broaden their knowledge, thereby improving their presentation skills and eloquence. This process helps users keep growing in the field of eloquence and connect with other speaking enthusiasts. Before a user enters the community, an authentication and user account management system ensures that community members are legitimately registered users.

Optionally, the system of the embodiments of the present application also provides a personalized learning resource library offering learning resources in the field of eloquence, including video tutorials, articles, example speeches, and practice materials. Specifically:

1. Accessing the library: users can reach the personalized learning resource library from within the virtual tutor system by selecting the library section.

2. Searching for resources: once in the library, users can use the search function or browse categories such as video tutorials, articles, example speeches, and practice materials, retrieving resources according to their learning needs and eloquence dimension indicators.

3. Personalized recommendation: based on the user's eloquence dimension assessment results and learning goals, the system can also provide personalized resource recommendations through a recommendation algorithm. These recommendations target the user's weaknesses and needs to help improve their speaking performance.

4. Browsing a resource: users can click a selected resource to view its details. For example, for a video tutorial the system shows the video's description, duration, author, and other information, and the user can choose to watch or download it.

5. Learning and practice: users can pick learning and practice resources according to their own study plan, for example watching tutorial videos, reading articles, studying example speeches, or downloading practice materials. These resources help users improve their eloquence and presentation skills.

6. Learning records: the system tracks the user's learning progress and activity, including videos watched, articles read, and exercises completed; users can review their records at any time to see their progress.

Through the personalized learning resource library, users can conveniently access learning resources in the field of eloquence, choose suitable resources according to their needs and eloquence dimension indicators, and improve their eloquence and speaking ability in a personalized way. This process helps users keep learning and improving.

Optionally, the system of the embodiments of the present application also provides eloquence dimension indicator tracking: users can track their own eloquence dimension indicators and compare them with those of other community members to gauge their progress. Specifically:

1. Opening the tracking interface: users can log in to the virtual tutor community and enter the "eloquence dimension indicator tracking" section, which provides tracking and comparison of the user's indicators.

2. Viewing personal indicators: users can view their personal eloquence dimension indicator profile, which holds historical data for indicators such as fluency, confidence, verbal expressiveness, and posture and body language, showing how the user's eloquence has changed over different periods.

3. Comparing with other members: users can choose to compare their indicators with those of other community members, selecting specific members and viewing their indicator data. This helps users understand their relative standing in spoken expression.

4. Setting learning goals: based on the historical indicator data and comparison results, users can set personal eloquence learning goals, identify the dimensions that need improvement, and draw up a corresponding study plan.

5. Tracking progress: users can return to the tracking interface regularly to see how their indicators progress through learning and practice, with charts and graphs visualizing their improvement.

Through the indicator tracking function, users gain a better view of their development in eloquence and can compare themselves with other community members, assessing their progress and setting learning goals. This helps users keep improving and growing in spoken expression.

The method of the embodiments of the present application achieves at least the following effects:

1. Accurate eloquence assessment and feedback: by introducing sophisticated eloquence computation formulas and multimodal feedback generation, the method assesses a learner's eloquence precisely. Compared with traditional approaches, it identifies and quantifies weaknesses and room for improvement in a speech more accurately, giving learners high-quality feedback and improvement suggestions.

2. Personalized learning path: the eloquence dimension assessment results and the learning goals set by the user are used to create a personalized learning plan, which better meets the needs of different learners and helps them make greater progress in spoken expression.

3. Multimodal sentiment analysis: the multimodal feedback generation performs sentiment analysis not only on text but also on modalities such as audio and images. This comprehensive analysis helps learners understand their speech performance more fully and improve their technique.

4. Real-time interaction and feedback: the real-time interactive interface and the virtual tutor community let learners interact with the system and with other learners anytime, anywhere, improving learning efficiency, resolving problems promptly, and yielding more feedback and suggestions.

5. Higher learning efficiency and quality: taken together, these techniques and functions provide a more efficient, personalized, and comprehensive eloquence training method. Learners can improve their eloquence faster while enjoying the process and engaging more.

In summary, the present application achieves significant benefits in improving the accuracy, personalization, and comprehensiveness of eloquence training. By introducing sophisticated eloquence computation formulas and multimodal feedback technology, it brings an innovative and efficient solution to the field, promising to improve learners' speaking and presentation skills and thereby to have a positive impact in many areas.

Referring to Figure 2, a structural block diagram of a multimodal feedback device for eloquence training according to an embodiment of the present application is shown. The device may include:

a first acquisition module, configured to acquire input information and generate an original learning plan according to the input information;

a second acquisition module, configured to acquire first eloquence training data while the original learning plan is being executed;

an adjustment module, configured to analyze the first eloquence training data along several eloquence dimensions to obtain an eloquence score for each dimension, and to adjust the original learning plan according to the eloquence scores and the input information to obtain a target learning plan;

a third acquisition module, configured to acquire second eloquence training data while the target learning plan is being executed, and to take the first eloquence training data or the second eloquence training data as target eloquence training data;

a feedback module, configured to perform sentiment analysis on the target eloquence training data to obtain a sentiment analysis result, and to determine multimodal feedback according to the result, the multimodal feedback including at least one of text feedback, audio feedback, visual feedback, and tactile feedback.

In one implementation, the feedback module is further configured to:

perform feature extraction on the target eloquence training data to obtain several eloquence dimension indicators and time-series eloquence dimension indicators;

determine a first emotion perception indicator from the eloquence dimension indicators and preset weights, and a second emotion perception indicator from the time-series eloquence dimension indicators and the preset weights;

determine emotional feedback suggestions based on the first emotion perception indicator and/or the second emotion perception indicator;

generate the emotional feedback suggestions in real time, or, using a reinforcement learning method, determine target emotional feedback suggestions from the emotional feedback suggestions and a reward function and generate the target emotional feedback suggestions;

or,

compute a fluency score and a confidence score for the target eloquence training data;

output improvement suggestions and/or strengths through the virtual tutor system according to the fluency score and the confidence score.

For the functions of the modules in the devices of the embodiments of the present application, refer to the corresponding descriptions in the method above; they are not repeated here.

Referring to Figure 3, a structural block diagram of an electronic device according to an embodiment of the present application is shown. The electronic device includes a memory 310 and a processor 320; the memory 310 stores instructions executable on the processor 320, and the processor 320 loads and executes the instructions to implement the multimodal feedback method for eloquence training of the above embodiments. There may be one or more memories 310 and processors 320.

In one implementation, the electronic device further includes a communication interface 330 for communicating and exchanging data with external devices. If the memory 310, the processor 320, and the communication interface 330 are implemented independently, they may be connected to one another by a bus and communicate over it. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is drawn in Figure 3, but this does not mean that there is only one bus or one type of bus.

可选的,在具体实现上,如果存储器310、处理器320及通信接口330集成在一块芯片上,则存储器310、处理器320及通信接口330可以通过内部接口完成相互间的通信。Optionally, in a specific implementation, if the memory 310, the processor 320 and the communication interface 330 are integrated on a chip, the memory 310, the processor 320 and the communication interface 330 can communicate with each other through an internal interface.

本申请实施例提供了一种计算机可读存储介质,其存储有计算机程序,该计算机程序被处理器执行时实现上述实施例中提供的口才训练的多模态反馈方法。Embodiments of the present application provide a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, the multi-modal feedback method for eloquence training provided in the above embodiments is implemented.

本申请实施例还提供了一种芯片，该芯片包括处理器，用于从存储器中调用并运行存储器中存储的指令，使得安装有芯片的通信设备执行本申请实施例提供的方法。An embodiment of the present application further provides a chip. The chip includes a processor configured to call from a memory and run the instructions stored in the memory, so that a communication device equipped with the chip executes the method provided by the embodiments of the present application.

本申请实施例还提供了一种芯片，包括：输入接口、输出接口、处理器和存储器，输入接口、输出接口、处理器以及存储器之间通过内部连接通路相连，处理器用于执行存储器中的代码，当代码被执行时，处理器用于执行本申请实施例提供的方法。An embodiment of the present application further provides a chip, including an input interface, an output interface, a processor, and a memory, which are connected to one another through an internal connection path. The processor is configured to execute the code in the memory; when the code is executed, the processor executes the method provided by the embodiments of the present application.

应理解的是，上述处理器可以是中央处理器(Central Processing Unit,CPU)，还可以是其他通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者是任何常规的处理器等。值得说明的是，处理器可以是支持进阶精简指令集机器(advanced RISC machines,ARM)架构的处理器。It should be understood that the above processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor. It is worth noting that the processor may be a processor supporting the advanced RISC machines (ARM) architecture.

进一步地，可选的，上述存储器可包括只读存储器和随机存取存储器，还可以包括非易失性随机存取存储器。该存储器可以是易失性存储器或非易失性存储器，或可包括易失性和非易失性存储器两者。其中，非易失性存储器可以包括只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以包括随机存取存储器(random access memory,RAM)，其用作外部高速缓存。通过示例性但非限制性说明，许多形式的RAM可用。例如，静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic random access memory,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。Further and optionally, the above memory may include a read-only memory and a random access memory, and may further include a non-volatile random access memory. The memory may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may include a random access memory (RAM), which is used as an external cache. By way of illustration and not limitation, many forms of RAM are available, for example, static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).

在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时，全部或部分地产生按照本申请的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络，或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输。The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another.

在本说明书的描述中，参考术语“一个实施例”“一些实施例”“示例”“具体示例”或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包括于本申请的至少一个实施例或示例中。而且，描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外，在不相互矛盾的情况下，本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.

此外,术语“第一”“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”“第二”的特征可以明示或隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。In addition, the terms “first” and “second” are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, features defined by "first" and "second" may explicitly or implicitly include at least one of these features. In the description of this application, "plurality" means two or more than two, unless otherwise explicitly and specifically limited.

流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为，表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分。并且本申请的优选实施方式的范围包括另外的实现，其中可以不按所示出或讨论的顺序，包括根据所涉及的功能按基本同时的方式或按相反的顺序，来执行功能。Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process. The scope of the preferred embodiments of the present application also includes other implementations in which the functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved.

在流程图中表示或在此以其他方式描述的逻辑和/或步骤，例如，可以被认为是用于实现逻辑功能的可执行指令的定序列表，可以具体实现在任何计算机可读介质中，以供指令执行系统、装置或设备（如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统）使用，或结合这些指令执行系统、装置或设备而使用。The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered a sequenced list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them).

应理解的是，本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中，多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。上述实施例方法的全部或部分步骤是可以通过程序来指令相关的硬件完成，该程序可以存储于一种计算机可读存储介质中，该程序在执行时，包括方法实施例的步骤之一或其组合。It should be understood that the parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above implementations, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and when executed, the program performs one of, or a combination of, the steps of the method embodiments.

此外,在本申请各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。上述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读存储介质中。该存储介质可以是只读存储器,磁盘或光盘等。In addition, each functional unit in each embodiment of the present application can be integrated into a processing module, or each unit can exist physically separately, or two or more units can be integrated into one module. The above-mentioned integrated module can be implemented in the form of hardware or in the form of a software functional module. If the above-mentioned integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. The storage medium can be a read-only memory, a disk or an optical disk, etc.

以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到其各种变化或替换，这些都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of various changes or substitutions within the technical scope disclosed in the present application, and these shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

CN202410201444.5A | 2024-02-23 | 2024-02-23 | A multimodal feedback method, device, equipment and storage medium for eloquence training | Active | CN117788239B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410201444.5A (granted as CN117788239B) | 2024-02-23 | 2024-02-23 | A multimodal feedback method, device, equipment and storage medium for eloquence training

Publications (2)

Publication Number | Publication Date
CN117788239A | 2024-03-29
CN117788239B (en) | 2024-05-31

Family

ID=90402099

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410201444.5A (Active; granted as CN117788239B) | A multimodal feedback method, device, equipment and storage medium for eloquence training | 2024-02-23 | 2024-02-23

Country Status (1)

Country | Link
CN (1) | CN117788239B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112766173A (en)* | 2021-01-21 | 2021-05-07 | 福建天泉教育科技有限公司 | Multi-mode emotion analysis method and system based on AI deep learning
CN114187544A (en)* | 2021-11-30 | 2022-03-15 | 厦门大学 | A Multimodal Automatic Scoring Method for College English Speech
CN115496077A (en)* | 2022-11-18 | 2022-12-20 | 之江实验室 | A multi-modal sentiment analysis method and device based on modal observation and scoring
US20230080660A1 (en)* | 2021-09-07 | 2023-03-16 | Kalyna Miletic | Systems and method for visual-audio processing for real-time feedback
US11677575B1 (en)* | 2020-10-05 | 2023-06-13 | mmhmm inc. | Adaptive audio-visual backdrops and virtual coach for immersive video conference spaces
CN116484318A (en)* | 2023-06-20 | 2023-07-25 | 新励成教育科技股份有限公司 | A speech training feedback method, device and storage medium
CN117057961A (en)* | 2023-10-12 | 2023-11-14 | 新励成教育科技股份有限公司 | Online talent training method and system based on cloud service
CN117457218A (en)* | 2023-12-22 | 2024-01-26 | 深圳市健怡康医疗器械科技有限公司 | Interactive rehabilitation training assisting method and system
CN117522643A (en)* | 2023-12-04 | 2024-02-06 | 新励成教育科技股份有限公司 | An eloquence training method, device, equipment and storage medium
CN117541445A (en)* | 2023-12-11 | 2024-02-09 | 新励成教育科技股份有限公司 | An eloquence training method, system, equipment and medium for virtual environment interaction

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118378040A (en)* | 2024-04-01 | 2024-07-23 | 新励成教育科技股份有限公司 | A method, device and medium for eloquence training based on automatic weight adjustment
CN118378041A (en)* | 2024-04-01 | 2024-07-23 | 新励成教育科技股份有限公司 | A method and system for training eloquence based on biosensing and multimodal feedback
CN118378041B (en)* | 2024-04-01 | 2025-02-18 | 新励成教育科技有限公司 | Oral training method and system based on biosensing and multimode feedback
CN118378040B (en)* | 2024-04-01 | 2025-04-15 | 新励成教育科技有限公司 | A method, device and medium for eloquence training based on automatic weight adjustment
CN118313680A (en)* | 2024-04-08 | 2024-07-09 | 新励成教育科技股份有限公司 | A multi-dimensional emotion perception and intelligent eloquence teaching system
CN118410139A (en)* | 2024-04-10 | 2024-07-30 | 新励成教育科技股份有限公司 | A method, device and medium for eloquence training based on data retrieval and fusion
CN118410139B (en)* | 2024-04-10 | 2025-09-16 | 新励成教育科技有限公司 | Talent training method, device and medium based on data retrieval and fusion
CN118780947A (en)* | 2024-06-13 | 2024-10-15 | 新励成教育科技有限公司 | A security-enhanced intelligent eloquence training method, device and medium
CN118735744A (en)* | 2024-06-26 | 2024-10-01 | 新励成教育科技有限公司 | A method, device and medium for optimizing eloquence based on instant feedback and tracking
CN118657157A (en)* | 2024-07-02 | 2024-09-17 | 新励成教育科技股份有限公司 | A speech analysis method, system, device and medium for multi-task parallel processing
CN119106186A (en)* | 2024-08-06 | 2024-12-10 | 新励成教育科技有限公司 | Method, system, device and medium for recommending eloquence resources

Also Published As

Publication number | Publication date
CN117788239B (en) | 2024-05-31

Similar Documents

Publication | Title
CN117788239B (en) | A multimodal feedback method, device, equipment and storage medium for eloquence training
US11798431B2 (en) | Public speaking trainer with 3-D simulation and real-time feedback
Zatarain Cabada et al. | A virtual environment for learning computer coding using gamification and emotion recognition
CN106663383B (en) | Method and system for analyzing a subject
US11393357B2 (en) | Systems and methods to measure and enhance human engagement and cognition
CN117522643B (en) | Talent training method, device, equipment and storage medium
US20180268821A1 (en) | Virtual assistant for generating personal suggestions to a user based on intonation analysis of the user
US12067892B1 (en) | System and method for vocal training
CN117541444B (en) | An interactive virtual reality eloquence expression training method, device, equipment and medium
CN117541445B (en) | A virtual environment interactive eloquence training method, system, device and medium
Paay et al. | Can digital personal assistants persuade people to exercise?
CN118173119A (en) | A method for training eloquence based on dynamic adjustment mechanism
Bahreini et al. | Communication skills training exploiting multimodal emotion recognition
KR20240115759A (en) | Apparatus and method for providing learning experience of english based on artificial intelligence chatbot
CN117788235A (en) | Personalized talent training method, system, equipment and medium
CN118135856B (en) | A method for training eloquence based on document editing and communication
CN118378040A (en) | A method, device and medium for eloquence training based on automatic weight adjustment
Mirzoyeva et al. | Formation of auditory and speech competences in learning English based on neural network technologies: psycholinguistic aspect
CN117635383A (en) | A virtual tutor and multi-person collaborative eloquence training system, method and equipment
US20220020289A1 (en) | Method and apparatus for speech language training
Kumagai et al. | Scenario-based dialogue system based on pause detection toward daily health monitoring
Guo | A practical study of English pronunciation correction using deep learning techniques
Laverde Manotas | Integrating large language model-based agents into a virtual patient chatbot for clinical anamnesis training
Huhta et al. | Learning theories in pedagogical agent research: A two-phased systematic review
Forghani | System Development and Evaluation of a Social Robot as a Public Speaking Rehearsal Coach

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CP03 | Change of name, title or address
  Address after: Unit 1403, 1404, 1405, 1406, 1407, 1408, Floor 14, No. 368, Lijiao Road, Haizhu District, Guangzhou, Guangdong, 510000
  Patentee after: New Licheng Education Technology Co.,Ltd.
  Country or region after: China
  Address before: Unit 03, 04, 05, 06, 07, 08, 14th Floor, No. 368 Lijiao Road, Haizhu District, Guangzhou City, Guangdong Province
  Patentee before: Xinlicheng Education Technology Co.,Ltd.
  Country or region before: China
