CN115463413A - Interaction device, control method and interaction system applied to multi-person interaction scene - Google Patents

Interaction device, control method and interaction system applied to multi-person interaction scene

Info

Publication number
CN115463413A
Authority
CN
China
Prior art keywords
interactive
information
posture
user
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211213877.XA
Other languages
Chinese (zh)
Inventor
翁志彬
刘勇
周克
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pimax Technology Shanghai Co ltd
Original Assignee
Pimax Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pimax Technology Shanghai Co ltd
Priority to CN202211213877.XA
Publication of CN115463413A
Legal status: Pending (current)

Abstract

The application discloses an interaction device, a control method, and an interaction system for multi-person interaction scenarios. When multiple interaction devices interact jointly, a posture acquisition module obtains, in real time, first posture information of the user of the current interaction device; each interaction device thus obtains the posture information of its own user and also receives second posture information of the users of the other interaction devices in the interaction system. The action posture of a first virtual avatar and the action posture of a second virtual avatar are then generated from the first posture information and the second posture information, respectively, so as to reflect the interaction between the user of the current interaction device and the users of the other interaction devices. This realizes an interactive display of multiple users in the same scene and improves the vividness and experience of multi-person interaction.

Description

Translated from Chinese
Interaction device, control method and interaction system applied to multi-person interaction scenarios

Technical Field

The present application relates to the technical field of interactive equipment, and in particular to an interaction device, a control method, and an interaction system applied to multi-person interaction scenarios.

Background Art

Virtual reality (VR) technology, also known as virtual environment or artificial-environment technology, draws on computer science, electronic information, and simulation. Its basic implementation is computer-based: it integrates the latest advances in 3D graphics, multimedia, simulation, display, and servo technologies and, with the help of computers and related equipment, generates a realistic virtual world offering three-dimensional visual, tactile, olfactory, and other sensory experiences, so that a person in that virtual world feels truly present. Current VR devices mainly track the head or the hands to achieve real-time dynamic tracking of the wearer's virtual avatar in the virtual environment, thereby enhancing the user's sense of immersion.

However, handheld products (handheld devices such as mobile phones and tablets, handheld game consoles, and the like) generally have no real-time position tracking of their own. As a result, users of VR devices with positioning capability cannot share an interactive display with handheld users in the same scene, and real-time dynamic state-tracking interaction in a single scene cannot be achieved between a VR device user and a handheld user, or among several handhelds, so the experience of multi-person interaction scenarios is poor.

Summary of the Invention

The present application is proposed to solve the above technical problems. Embodiments of the present application provide an interaction device, a control method, and an interaction system applied to multi-person interaction scenarios, which solve the above technical problems.

According to one aspect of the present application, an interaction device applied to multi-person interaction scenarios is provided. The device is used in an interaction system comprising a plurality of such interaction devices connected in communication. The interaction device includes: a posture acquisition module for obtaining first posture information of the user of the current interaction device; an information sending module, communicatively connected to the posture acquisition module, for sending the first posture information to the other interaction devices; an information receiving module, communicatively connected to the other interaction devices in the interaction system, for receiving second posture information of the users of the other interaction devices; and an execution control module, communicatively connected to the posture acquisition module and the information receiving module, for generating the action posture of a first virtual avatar and the action posture of a second virtual avatar from the first posture information and the second posture information, respectively. The first virtual avatar and the second virtual avatar are displayed in the virtual scene of the current interaction device and respectively represent the user of the current interaction device and the users of the other interaction devices.

According to another aspect of the present application, a real-time body tracking control method for multi-person interaction scenarios is provided, applied to an interaction device in an interaction system for multi-person interaction, the interaction system comprising a plurality of such interaction devices connected in communication. The method includes: obtaining first posture information of the user of the current interaction device; sending the first posture information to the other interaction devices in the interaction system; receiving second posture information of the users of the other interaction devices; and generating the action posture of a first virtual avatar and the action posture of a second virtual avatar from the first posture information and the second posture information, respectively. The first virtual avatar and the second virtual avatar are displayed in the virtual scene of the current interaction device and respectively represent the user of the current interaction device and the users of the other interaction devices.

According to another aspect of the present application, an interaction system applied to multi-person interaction scenarios is provided, comprising a plurality of interaction devices as described in any of the above, connected to one another in communication. The interaction devices include multiple handheld consoles, or multiple virtual reality devices, or at least one handheld console and at least one virtual reality device.

In the interaction device, control method, and interaction system provided by the present application, each of the communicatively connected interaction devices is provided with a posture acquisition module that obtains, in real time during the interaction, the first posture information of the user of the current interaction device, together with an information receiving module and an information sending module communicatively connected to the other interaction devices in the system, which receive the second posture information of the other devices' users and send the first posture information to those devices. An execution control module, communicatively connected to the posture acquisition module and the information receiving module, generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first posture information and the second posture information once both have been obtained. In other words, when multiple interaction devices interact jointly, each device obtains the posture information of its own user and of the users of the other devices in the system, and generates the action postures of the first and second virtual avatars accordingly, reflecting the interaction between the user of the current device and the users of the other devices. This realizes an interactive display of multiple users in the same scene and improves the vividness and experience of multi-person interaction.

Brief Description of the Drawings

The above and other objects, features, and advantages of the present application will become more apparent from the following more detailed description of its embodiments in conjunction with the accompanying drawings. The drawings provide a further understanding of the embodiments, constitute a part of the specification, and together with the embodiments serve to explain the present application without limiting it. In the drawings, the same reference numerals generally denote the same components or steps.

Fig. 1 is a schematic structural diagram of an interaction device for multi-person interaction scenarios provided by an exemplary embodiment of the present application.

Fig. 2 is a schematic structural diagram of an interaction device for multi-person interaction scenarios provided by another exemplary embodiment of the present application.

Fig. 3 is a schematic structural diagram of an interaction device for multi-person interaction scenarios provided by another exemplary embodiment of the present application.

Fig. 4 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by an exemplary embodiment of the present application.

Fig. 5 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by another exemplary embodiment of the present application.

Fig. 6 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by another exemplary embodiment of the present application.

Fig. 7 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by another exemplary embodiment of the present application.

Fig. 8 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by another exemplary embodiment of the present application.

Fig. 9 is a schematic structural diagram of an interaction system for multi-person interaction scenarios provided by an exemplary embodiment of the present application.

Fig. 10 is a structural diagram of an electronic device provided by an exemplary embodiment of the present application.

Detailed Description

Hereinafter, exemplary embodiments of the present application will be described in detail with reference to the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present application, and it should be understood that the present application is not limited to the exemplary embodiments described here.

Fig. 1 is a schematic structural diagram of an interaction device for multi-person interaction scenarios provided by an exemplary embodiment of the present application. The interaction device is used in an interaction system comprising a plurality of communicatively connected interaction devices. As shown in Fig. 1, the interaction device 1 includes a posture acquisition module 11, an information receiving module 12, an information sending module 14, and an execution control module 13. The posture acquisition module 11 obtains the first posture information of the user of the current interaction device. The information receiving module 12 is communicatively connected to the other interaction devices in the system and receives the second posture information of their users, which is obtained by the posture acquisition modules on those devices. The information sending module 14 is communicatively connected to the posture acquisition module 11 and sends the first posture information to the other interaction devices. The execution control module 13 is communicatively connected to the posture acquisition module 11 and the information receiving module 12 and generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first posture information and the second posture information, respectively. The first and second virtual avatars are displayed in the virtual scene of the current interaction device and respectively represent the user of the current device and the users of the other devices.
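
By way of illustration, the four modules above can be sketched as the following Python interfaces; this is a minimal sketch, and the class, method, and field names are assumptions for illustration rather than anything specified in the patent.

    # Illustrative sketch of the modules described for Fig. 1; names are not from the patent.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class PostureInfo:
        """Posture information exchanged between interaction devices."""
        user_id: str
        joint_keypoints: Dict[str, Tuple[float, float, float]]  # e.g. {"left_elbow": (x, y, z)}
        timestamp: float

    class PostureAcquisitionModule:
        def acquire(self) -> PostureInfo:
            """Obtain the first posture information of the local user (camera/IMU specific)."""
            raise NotImplementedError

    class InformationSendingModule:
        def send(self, posture: PostureInfo) -> None:
            """Send the first posture information to the other interaction devices."""
            raise NotImplementedError

    class InformationReceivingModule:
        def receive(self) -> List[PostureInfo]:
            """Receive the second posture information from the other interaction devices."""
            raise NotImplementedError

    class ExecutionControlModule:
        def update_avatars(self, first: PostureInfo, others: List[PostureInfo]) -> None:
            """Generate the action postures of the first and second virtual avatars."""
            raise NotImplementedError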

With the continuous development of virtual reality technology, more and more games can be presented to users through virtual reality, improving the gaming experience. A virtual reality device is usually equipped with sensors, cameras, or similar instruments that capture the position changes of its control components and convert them into control commands for the host, so that the commands reproduce the user's movements, immersing the user in the game and making it feel more real. Handheld game consoles and similar devices usually have no positioning capability and are typically operated with a remote controller or joystick. Handhelds are therefore generally limited to single-viewpoint games: they cannot play together with other handhelds or virtual reality devices in the same scene (that is, a single device cannot display the users of several devices in one scene), which means that current handhelds do not support multi-viewpoint interactive games.

To solve the above problems and enable multi-person, dynamically tracked interactive games between handheld game consoles, or between a handheld and a virtual reality device, the present application proposes an interaction device for multi-person interaction scenarios. The interaction device may be a handheld console or a virtual reality device. By providing each interaction device in the system with a posture acquisition module 11, an information receiving module 12, an information sending module 14, and an execution control module 13, dynamic state-tracking interaction among multiple devices is achieved: the posture acquisition module 11 obtains the first posture information of the device's own user, the information receiving module 12 receives the second posture information from the other devices, the information sending module 14 sends the first posture information to the other devices, and the execution control module 13 generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first and second posture information, respectively. The first and second virtual avatars are displayed in the virtual scene of the current device and respectively represent the user of the current device and the users of the other devices. Specifically, the posture acquisition module 11 may be a camera, an inertial sensor, or a similar component on the virtual reality device, and the information receiving module 12 may be a wireless communicator or another component capable of receiving data.

Each interaction device in the system is provided with a posture acquisition module to obtain its own user's posture information and uses the information receiving module to obtain the posture information from the other devices, so that posture information is exchanged among all devices in the system; that is, every device can obtain the posture information of every device in the system. Once a device has this information, its execution control module generates and displays, on the current device, the action posture of the first virtual avatar corresponding to the first posture information and the action posture of the second virtual avatar corresponding to the second posture information. The action posture appears as the avatar's game action on the current device's display (for example, a catching action in a ball game); at the same time, based on the second posture information, the execution control module displays the action postures of the other devices' users, which appear as the game actions of their avatars on the current device's display (for example, a hitting action in a ball game). Because the game actions of the avatars of both the current device and the other devices are shown on the current device's display in real time, the realism and playability of the game are improved: in the ball game above, a user can always see the opponent's hitting or catching motion and position on the current device and so hit or catch more purposefully, which brings the operation closer to a real ball match and improves the gaming experience.

Since the interaction devices in the system need to send posture information to one another, each device can be provided with an information sending module 14, which sends the posture information acquired in real time by the posture acquisition module 11 of the current device to the other devices. Through the information sending module 14 of each device, posture information can be exchanged among all devices in the system. After each device has obtained the posture information of all devices, its execution control module generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first and second posture information, respectively; the first and second virtual avatars are displayed in the virtual scene of the current device and respectively represent the user of the current device and the users of the other devices. Displaying the game actions of all devices' avatars in real time on every device's display interface improves the realism and playability of the game.

It should be understood that the information sending module 14 and the information receiving module 12 in this application are two logically divided functional modules; in an actual implementation they may be one integrated module, such as a wireless communication module, which uses wireless communication to both send and receive information and data.
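
As an example of such an integrated send/receive module, a UDP multicast transceiver could play both roles. This is only a sketch: the multicast group, port, and serialization format are assumptions, not details given in the patent.

    import pickle
    import socket

    class WirelessTransceiver:
        """Combined sending/receiving module over UDP multicast (illustrative sketch)."""
        GROUP, PORT = "239.0.0.1", 5007    # placeholder multicast group and port

        def __init__(self):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            self.sock.bind(("", self.PORT))
            membership = socket.inet_aton(self.GROUP) + socket.inet_aton("0.0.0.0")
            self.sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
            self.sock.setblocking(False)

        def send(self, posture) -> None:
            # Broadcast the local posture information to the group.
            self.sock.sendto(pickle.dumps(posture), (self.GROUP, self.PORT))

        def receive(self) -> list:
            # Drain whatever has arrived since the last call.
            # Note: with multicast loopback enabled, a device also sees its own packets.
            postures = []
            while True:
                try:
                    data, _ = self.sock.recvfrom(65536)
                except BlockingIOError:
                    break
                postures.append(pickle.loads(data))
            return postures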

In the interaction device for multi-person interaction scenarios provided by the present application, each of the communicatively connected interaction devices is provided with a posture acquisition module that obtains, in real time during the interaction, the first posture information of the user of the current device, together with an information receiving module and an information sending module communicatively connected to the other devices in the system, which receive the second posture information of the other devices' users and send the first posture information to those devices. An execution control module, communicatively connected to the posture acquisition module and the information receiving module, generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first and second posture information once both have been obtained. In other words, when multiple devices interact jointly, each device obtains the posture information of its own user and of the users of the other devices and generates the action postures of the first and second virtual avatars accordingly, reflecting the interaction between the user of the current device and the users of the other devices, so that multiple users are displayed interacting in the same scene, which improves the vividness and experience of multi-person interaction.

In one embodiment, the current interaction device may include a host, and the posture acquisition module includes a camera and an inertial sensor, both arranged on the host. The camera captures video of the user of the current device, and the inertial sensor provides IMU data describing the user's posture. The camera transmits the video to the host, the inertial sensor transmits the IMU data to the host, and the host processes the video and IMU data to obtain the first posture information.

The current interaction device thus comprises a host with a camera and an inertial sensor mounted on it; the camera and the inertial sensor respectively acquire video of the device's user and IMU data describing the user's posture, and the host processes this video and IMU data to obtain the first posture information of the current device.
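
As a rough illustration only (the patent does not specify the algorithm), the host-side processing could combine per-frame joint keypoints from the camera with a simple complementary filter over the IMU samples; the keypoint-detector callable and the filter constant below are assumptions.

    import numpy as np

    class HostPostureEstimator:
        """Toy fusion of camera keypoints with IMU orientation (roll/pitch only)."""

        def __init__(self, alpha: float = 0.98):
            self.alpha = alpha          # weight of the gyro-integrated estimate
            self.angles = np.zeros(2)   # current roll, pitch in radians

        def update(self, video_frame, imu_sample, detect_keypoints):
            # detect_keypoints: callable returning {joint_name: (x, y, z)} for one frame
            keypoints = detect_keypoints(video_frame)

            accel = np.asarray(imu_sample["accel"], dtype=float)   # m/s^2
            gyro = np.asarray(imu_sample["gyro"], dtype=float)     # rad/s
            dt = float(imu_sample["dt"])                           # seconds

            # Accelerometer-only tilt estimate.
            roll_acc = np.arctan2(accel[1], accel[2])
            pitch_acc = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))

            # Complementary filter: integrate the gyro, correct slowly with the accel tilt.
            self.angles = self.alpha * (self.angles + gyro[:2] * dt) \
                          + (1 - self.alpha) * np.array([roll_acc, pitch_acc])

            return {"keypoints": keypoints, "orientation": self.angles.copy()}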

In another embodiment, the current interaction device includes a host, and the posture acquisition module includes a camera and an inertial sensor arranged on the host. The camera captures video of the user and the inertial sensor provides IMU data describing the user's posture; the camera sends the video to a cloud server and the inertial sensor sends the IMU data to the cloud server, the cloud server processes the video and IMU data to obtain the first posture information, and the cloud server sends the first posture information back to the host.

In this arrangement the camera sends the captured video, and the inertial sensor sends the captured IMU data, to the cloud server; the cloud server processes them to obtain the first posture information of the current device and then sends it to the host, reducing the host's computational load and thereby improving its performance.
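
A minimal sketch of such an offload, assuming a hypothetical HTTP endpoint and payload format (neither is specified in the patent):

    import json
    import requests

    CLOUD_POSE_ENDPOINT = "https://example.com/api/v1/pose"   # placeholder URL

    def request_posture_from_cloud(video_chunk: bytes, imu_samples: list) -> dict:
        """Upload raw sensor data and return the posture computed by the cloud server."""
        response = requests.post(
            CLOUD_POSE_ENDPOINT,
            files={"video": ("clip.mp4", video_chunk, "video/mp4")},
            data={"imu": json.dumps(imu_samples)},
            timeout=5,
        )
        response.raise_for_status()
        return response.json()   # e.g. {"keypoints": {...}, "orientation": [...]}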

In one embodiment, the first posture information includes key feature points at the joints of the user of the current interaction device, and the second posture information includes key feature points at the joints of the users of the other interaction devices. The execution control module extracts, from the video of the current device's user, the key feature points at that user's joints, and likewise extracts, from the video of the other devices' users, the key feature points at their joints. Based on the key feature points at the joints of the current device's user, the execution control module tracks that user's body posture and maps it onto the first virtual avatar; based on the key feature points at the joints of the other devices' users, it tracks their body postures and maps them onto the second virtual avatar.

Because the video stream contains a large amount of data, processing all of it would take considerable time and would also hurt the fluency and efficiency of the interaction. Therefore, after the video is acquired, the execution control module extracts only the key feature points corresponding to the user's joints (for example, the elbows), tracks the user's body posture from those joint feature points, and maps the body posture onto the first virtual avatar. That is, after extracting the joint key feature points from the video, the execution control module derives the user's body posture from them and then maps that posture onto the avatar, so that the avatar reflects the user's actual pose and the realism of the interaction is improved.
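
For example, the mapping step might look like the following sketch, where the joint list and the avatar's set_joint_position method are invented for illustration and are not the patent's implementation:

    JOINTS_OF_INTEREST = (
        "head", "left_shoulder", "right_shoulder",
        "left_elbow", "right_elbow", "left_knee", "right_knee",
    )

    def map_posture_to_avatar(keypoints: dict, avatar) -> None:
        """Drive the avatar from only the joint keypoints needed for coarse body tracking."""
        for joint in JOINTS_OF_INTEREST:
            position = keypoints.get(joint)
            if position is not None:
                avatar.set_joint_position(joint, position)   # hypothetical avatar API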

Fig. 2 is a schematic structural diagram of an interaction device for multi-person interaction scenarios provided by another exemplary embodiment of the present application. As shown in Fig. 2, the current interaction device 1 may include a virtual reality device comprising a VR headset 10 and a handle 20; the posture acquisition module 11 obtains the first posture information of the current device's user by spatially locating the VR headset 10 and the handle 20.

Virtual reality is the combination of the virtual and the real. In theory, virtual reality (VR) technology is a computer simulation system that can create a virtual world and let users experience it: a computer generates a simulated environment in which the user is immersed. VR technology takes data from real life, converts it into electronic signals by computer, and combines it with various output devices to turn it into phenomena people can perceive. These phenomena may be real objects or substances invisible to the naked eye, represented by three-dimensional models. Because they are not seen directly but are a world simulated by computer technology, this is called virtual reality.

If the current interaction device is a virtual reality device comprising a VR headset 10 and a handle 20, the posture acquisition module 11 may be arranged in the headset 10 and/or the handle 20. It may be placed only in the VR headset 10 (for example, a camera on the headset that captures images of the handle 20 and analyzes them to obtain the handle's position), only in the handle 20 (for example, a laser positioning transmitter or receiver in the handle that emits a laser signal toward calibration points, or receives signals from them, to locate the handle 20), or in both the headset 10 and the handle 20 at the same time. It should be understood that, in this application, the position of the posture acquisition module 11 can be set according to actual needs, as long as the requirements of the interactive game are met. The user wears the VR headset 10 and holds the handle 20, and issues control commands by operating the handle 20 (for example, pressing its buttons, sliding on it, or moving it) to control the game; the host of the virtual reality device receives and executes these commands and shows the result on a display interface visible through the VR headset 10 (either a physical screen or a virtual screen).

The virtual reality device uses its posture acquisition module to obtain its own posture information and its information receiving module to obtain the posture information of the other interaction devices, so that posture information is exchanged among the devices in the system. After the virtual reality device has obtained the posture information of all devices in the system, its execution control module generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first and second posture information, respectively, reflecting the interaction between the user of the current device and the users of the other devices, so that multiple users are displayed interacting in the same scene, which improves the vividness and experience of multi-person interaction.

Fig. 3 is a schematic structural diagram of an interaction device for multi-person interaction scenarios provided by another exemplary embodiment of the present application. As shown in Fig. 3, the current interaction device 1 may include a handheld console 30 comprising a base, a host, and a handle, where the host is mounted on the base and the handle is separate from the host. The posture acquisition module 11 is arranged in the host and records video of the current device's user to obtain that user's first posture information.

A handheld game console ("handheld" for short), also known as a portable game console, is a small, easily carried dedicated game machine that can run video game software anytime and anywhere.

If the current interaction device is a handheld, the handheld 30 may include a display screen and operating components (such as a joystick, buttons, a touch screen, or a handle), with the posture acquisition module 11 arranged in the handheld's host. It should be understood that in this application the position of the posture acquisition module 11 can be set according to actual needs, as long as the requirements of the interactive game are met. The user issues control commands by operating the joystick, pressing buttons, sliding on the touch screen, moving the handle, and so on; the host of the handheld 30 receives and executes these commands and shows the result on the display screen.

The handheld uses its posture acquisition module to obtain its own posture information and its information receiving module to obtain the posture information of the other interaction devices, so that posture information can be exchanged among the devices in the system.

In one embodiment, the software program built into the host of the current interaction device 1 may include multiple virtual scenes and multiple virtual avatars; the execution control module 13 receives the user's selection instruction and, according to it, makes the host of the current interaction device 1 display the selected virtual scene and avatar.

A scene-mode software program can be installed in advance on the host of the current interaction device. The program can provide multiple virtual scenes and multiple virtual avatars: the scenes may include a living-room mode (such as a multi-player dance contest), a cinema mode (group movie watching), or a beach mode (multi-player volleyball), and the avatars may be male or female (with various appearances and outfits). Before a multi-person interaction, a user can enter the program and choose a preferred virtual scene and avatars (for themselves and for other users). Once the selection is made, the host of the current device displays the chosen scene and avatar, and the user plays interactive games or watches films in that scene as that avatar, improving the interactive experience. It should be understood that, since different users may have different preferences and a single interaction device is normally used by a single user, different devices can be set to different virtual scenes and different avatars according to their users' choices, increasing the variety of the interaction.
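
A simple sketch of how the execution control module might apply such a selection instruction; the scene and avatar catalogues and the display API are invented examples, not part of the patent.

    SCENES = {
        "living_room": "multi-player dance contest",
        "cinema": "group movie watching",
        "beach": "multi-player volleyball",
    }
    AVATARS = {"avatar_m_01": "male, casual outfit", "avatar_f_01": "female, sportswear"}

    def apply_selection(host_display, scene_id: str, avatar_id: str) -> None:
        """Load the user's chosen scene and avatar on the current device's host."""
        if scene_id not in SCENES or avatar_id not in AVATARS:
            raise ValueError(f"unknown scene or avatar: {scene_id}, {avatar_id}")
        host_display.load_scene(scene_id)       # hypothetical display API
        host_display.spawn_avatar(avatar_id)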

In one embodiment, the current interaction device may be a handheld and another interaction device may be a detached VR headset, with the handheld and the detached VR headset sharing the same host, and the posture acquisition module arranged in that shared host.

A single host thus supports interaction between the handheld and the detached VR headset: the posture acquisition module in the host simultaneously obtains the posture information of the handheld user and of the VR headset user, and the action postures of the corresponding virtual avatars are generated from that information.

Fig. 4 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by an exemplary embodiment of the present application. The method is applied to an interaction device in an interaction system for multi-person interaction, the system comprising a plurality of communicatively connected interaction devices. As shown in Fig. 4, the real-time body tracking control method includes the following steps:

Step 410: obtain the first posture information of the current interaction device.

Each interaction device in the system is provided with a posture acquisition module to obtain its own first posture information. Specifically, if the current device is a virtual reality device, the posture acquisition module may be placed only in the headset (for example, a camera on the headset that captures images of the device's handle and analyzes them to obtain the handle's posture information), only in the handle (for example, a laser positioning transmitter or receiver in the handle that emits a laser signal toward calibration points, or receives signals from them, to locate the handle), or in both the headset and the handle. If the current device is a handheld comprising a host and a separate handle, the posture acquisition module is arranged on the host and records video of the current device's user to obtain that user's first posture information.

Step 420: send the first posture information to the other interaction devices in the interaction system.

Since the interaction devices in the system need to send posture information to one another, each device can be provided with an information sending module 14, which sends the posture information acquired in real time by the posture acquisition module 11 of the current device to the other devices. Through the information sending module 14 of each device, posture information can be exchanged among all devices in the system. After each device has obtained the posture information of all devices, its execution control module generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first and second posture information, respectively; the first and second virtual avatars are displayed in the virtual scene of the current device and respectively represent the user of the current device and the users of the other devices. Displaying the game actions of all devices' avatars in real time on every device's display interface improves the realism and playability of the game.

Step 430: receive the second posture information of the users of the other interaction devices in the interaction system.

The current interaction device is provided with an information receiving module communicatively connected to the other devices in the system, which receives the second posture information characterizing the positions of those devices. By obtaining the second posture information, the current device learns the operating actions of the other devices.

Step 440: generate the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first posture information and the second posture information, respectively.

The first and second virtual avatars are displayed in the virtual scene of the current interaction device and respectively represent the user of the current device and the users of the other devices. Each device in the system uses its posture acquisition module to obtain its own user's posture information and its information receiving module to obtain the posture information of the other devices' users, so that every device can obtain the posture information of all users in the system. Once this information is available, the execution control module on the current device generates the action posture of the first virtual avatar from the first posture information; this action posture appears as the avatar's game action on the current device's display. At the same time, based on the second posture information, the execution control module displays the action postures of the other devices, which appear as the game actions of their avatars on the current device's display. Showing the game actions of the avatars of both the current device and the other devices on the current device's display in real time improves the realism and playability of the game.
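
Assuming module interfaces like the sketch given after the description of Fig. 1, steps 410 to 440 could be wired together once per frame roughly as follows; the function and parameter names are hypothetical.

    def interaction_step(acquirer, sender, receiver, controller):
        """One pass through steps 410-440 on the current interaction device."""
        first_posture = acquirer.acquire()          # step 410: local user's posture
        sender.send(first_posture)                  # step 420: send to the other devices
        second_postures = receiver.receive()        # step 430: postures from the other devices
        controller.update_avatars(first_posture, second_postures)   # step 440: drive the avatars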

In the interaction method for multi-person interaction scenarios provided by the present application, the first posture information of the current interaction device is obtained in real time during the interaction, the first posture information is sent to the other interaction devices, and second posture information characterizing the positions of the other devices in the system is received. After the first and second posture information have been obtained, the action posture of the first virtual avatar corresponding to the current device is generated from the first posture information, and the action postures of the second virtual avatars corresponding to the other devices are displayed on the current device's host according to the second posture information. In other words, when multiple devices interact jointly, each device obtains its own posture information and the posture information of the other devices in the system and generates the action postures of the first and second virtual avatars accordingly, reflecting the interaction between the user of the current device and the users of the other devices, so that multiple users are displayed interacting in the same scene, which improves the vividness and experience of multi-person interaction.

Fig. 5 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by another exemplary embodiment of the present application. The current interaction device may include a host, and the posture acquisition module includes a camera and an inertial sensor arranged on the host. As shown in Fig. 5, step 410 above may include:

Step 411: obtain the video information and IMU data of the user of the current interaction device.

Step 412: analyze and process the video information and IMU data to obtain the first posture information.

The current interaction device comprises a host with a camera and an inertial sensor mounted on it; the camera and the inertial sensor respectively acquire video of the device's user and IMU data describing the user's posture, and the host processes this video and IMU data to obtain the first posture information of the current device.

Fig. 6 is a schematic flowchart of a real-time body tracking control method for multi-person interaction scenarios provided by another exemplary embodiment of the present application. The current interaction device includes a host, and the posture acquisition module includes a camera and an inertial sensor arranged on the host. As shown in Fig. 6, step 410 above may include:

Step 413: obtain the video information and IMU data of the user of the current interaction device.

步骤414:将视频信息及IMU数据信息发送至云端服务器,由云端服务器处理视频信息及IMU数据信息得到第一姿态信息。Step 414: Send the video information and IMU data information to the cloud server, and the cloud server processes the video information and IMU data information to obtain the first attitude information.

步骤415:接收云端服务器发送的第一姿态信息。Step 415: Receive the first gesture information sent by the cloud server.

当前互动装置包括一个主机和设置在主机上的摄像头、惯性传感器，利用摄像头和惯性传感器分别获取当前互动装置对应用户的视频信息和用户姿态的IMU数据信息，摄像头将获取到的视频信息发送至云端服务器，惯性传感器将获取到的IMU数据信息发送至云端服务器，云端服务器对该视频信息和IMU数据信息进行处理后得到当前互动装置的第一姿态信息，云端服务器在得到第一姿态信号后发送至主机上，以降低主机的计算量，从而提高主机的性能。The current interactive device includes a host and a camera and an inertial sensor arranged on the host. The camera and the inertial sensor respectively acquire the video information of the user corresponding to the current interactive device and the IMU data describing the user's posture. The camera sends the acquired video information to the cloud server and the inertial sensor sends the acquired IMU data to the cloud server; the cloud server processes the video information and the IMU data to obtain the first posture information of the current interactive device and, once it is obtained, sends the first posture information back to the host. This reduces the computational load on the host and thereby improves its performance.
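A sketch of this cloud-offload variant is shown below: the device only captures raw frames and IMU samples and ships them to a cloud service, which returns the computed first pose. The in-process CloudPoseService class stands in for a real remote endpoint (the application does not specify one), and all names and payload fields are assumptions.

```python
import json
from typing import List

class CloudPoseService:
    def estimate_pose(self, frame_jpeg: bytes, imu_samples: List[dict]) -> dict:
        # A real service would run pose estimation on the frame; here we fake a result
        # so the host-side flow can be shown without heavy computation on the device.
        return {"joints": {"head": [0.0, 1.7, 0.0]}, "imu_samples_used": len(imu_samples)}

def device_side_update(cloud: CloudPoseService) -> dict:
    frame = b"\xff\xd8...fake jpeg bytes..."          # from the headset/handheld camera
    imu = [{"gyro": [0.1, 0.0, 0.02], "acc": [0.0, -9.8, 0.1], "t": 0.016}]
    first_pose = cloud.estimate_pose(frame, imu)      # "upload", then receive the result
    return first_pose

print(json.dumps(device_side_update(CloudPoseService()), indent=2))
```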

若当前互动装置为虚拟现实设备，该虚拟现实设备包括VR头显10和手柄20，姿态获取模块11可以设置于头显10和/或手柄20中，即姿态获取模块11可以单独设置于VR头显10中(例如设置于VR头显10上的摄像头以获取手柄20的图像，并分析该图像以得到手柄20的位置信息)，也可以单独设置于手柄20中(例如设置于手柄20中的激光定位发射器或接收器，通过发射激光信号至标定点或接收标定点发射的激光信号以实现手柄20的定位)，还可以同时设置于VR头显10和手柄20中，应当理解，本申请中姿态获取模块11可以根据实际需求设定其位置，只要能够满足互动游戏的需求即可。用户头戴VR头显10且手握手柄20，用户通过手动控制手柄20(可以包括点击手柄20上的按钮、在手柄20上滑动、移动手柄20等操作)发出控制指令，以实现游戏操控，并且虚拟现实设备的主机接收该控制指令且执行该控制指令，并且将执行该控制指令的结果展示于VR头显10可以观察到的显示界面(可以是实体的显示屏，也可以是虚拟屏幕)上。If the current interactive device is a virtual reality device, the virtual reality device includes a VR headset 10 and a handle 20, and the posture acquisition module 11 may be arranged in the headset 10 and/or the handle 20. That is, the posture acquisition module 11 may be arranged only in the VR headset 10 (for example, a camera on the VR headset 10 that captures images of the handle 20 and analyzes them to obtain the position of the handle 20), only in the handle 20 (for example, a laser positioning transmitter or receiver in the handle 20 that locates the handle 20 by emitting laser signals to calibration points or receiving laser signals emitted from calibration points), or in both the VR headset 10 and the handle 20 at the same time. It should be understood that in the present application the position of the posture acquisition module 11 can be set according to actual needs, as long as the requirements of the interactive game are met. The user wears the VR headset 10 and holds the handle 20, and issues control commands by manually operating the handle 20 (which may include clicking a button on the handle 20, sliding on the handle 20, moving the handle 20, and so on) to control the game. The host of the virtual reality device receives and executes the control command and presents the result on a display interface observable through the VR headset 10 (which may be a physical screen or a virtual screen).

虚拟现实设备利用姿态获取模块以获取自身的姿态信息，并且利用信息接收模块获取其他互动装置的姿态信息，从而可以实现互动系统中的多个互动装置之间姿态信息的交互。虚拟现实设备在获取了互动系统中所有互动装置的姿态信息后，执行控制模块根据第一姿态信息和第二姿态信息分别生成第一虚拟化身的动作姿态和第二虚拟化身的动作姿态，以反映当前互动装置对应用户和其他互动装置对应用户的互动，以实现多人在同一个场景中的互动显示，从而提高了多人互动的生动性和体验感。The virtual reality device uses the posture acquisition module to obtain its own posture information and uses the information receiving module to obtain the posture information of the other interactive devices, so that posture information can be exchanged among the multiple interactive devices in the interaction system. After the virtual reality device has obtained the posture information of all interactive devices in the interaction system, the execution control module generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first posture information and the second posture information respectively, so as to reflect the interaction between the user of the current interactive device and the users of the other interactive devices, realizing an interactive display of multiple people in the same scene and improving the vividness and experience of multi-person interaction.

若当前互动装置为掌机，该掌机30可以包括显示屏和操作部件(例如操作杆、按钮、触控屏、手柄等)，姿态获取模块11设置于掌机30的主机中，应当理解，本申请中姿态获取模块11可以根据实际需求设定其位置，只要能够满足互动游戏的需求即可。用户通过手动控制操作杆、点击按钮、在触控屏上滑动、移动手柄等操作发出控制指令，以实现游戏操控，并且掌机30的主机接收该控制指令且执行该控制指令，并且将执行该控制指令的结果展示于显示屏上。If the current interactive device is a handheld game console, the handheld 30 may include a display screen and operating components (such as a joystick, buttons, a touch screen, a handle, etc.), and the posture acquisition module 11 is arranged in the host of the handheld 30. It should be understood that in the present application the position of the posture acquisition module 11 can be set according to actual needs, as long as the requirements of the interactive game are met. The user issues control commands by manually operating the joystick, clicking buttons, sliding on the touch screen, moving the handle, and so on to control the game; the host of the handheld 30 receives and executes the control command and presents the result on the display screen.

掌机利用姿态获取模块以获取自身的姿态信息，并且利用信息接收模块获取其他互动装置的姿态信息，从而可以实现互动系统中的多个互动装置之间姿态信息的交互。The handheld uses the posture acquisition module to obtain its own posture information and uses the information receiving module to obtain the posture information of the other interactive devices, so that posture information can be exchanged among the multiple interactive devices in the interaction system.

图7是本申请另一示例性实施例提供的一种多人互动场景下的身体实时追踪控制方法的流程示意图。第一姿态信息包括当前互动装置对应用户的关节处的关键特征点,第二姿态信息包括其他互动装置对应用户的关节处的关键特征点;如图7所述,上述步骤440可以包括:Fig. 7 is a schematic flowchart of a real-time body tracking control method in a multi-person interaction scenario provided by another exemplary embodiment of the present application. The first posture information includes key feature points at the joints of the current interactive device corresponding to the user, and the second posture information includes key feature points at the joints of other interactive devices corresponding to the user; as shown in Figure 7, theabove step 440 may include:

步骤441:抓取当前互动装置对应用户的视频信息中当前互动装置对应用户的关节处的关键特征点。Step 441: Capture the key feature points at the joints of the user corresponding to the current interactive device in the video information of the user corresponding to the current interactive device.

步骤442:抓取其他互动装置对应用户的视频信息中其他互动装置对应用户的关节处的关键特征点。Step 442: Capture the key feature points at the joints of the user corresponding to the other interactive device in the video information of the user corresponding to the other interactive device.

步骤443:根据当前互动装置对应用户的关节处的关键特征点追踪当前互动装置对应用户的身体姿势。Step 443: Track the body posture of the user corresponding to the current interactive device according to the key feature points at the joints of the user corresponding to the current interactive device.

步骤444:根据其他互动装置对应用户的关节处的关键特征点追踪其他互动装置对应用户的身体姿势。Step 444: Track the body posture of the user corresponding to the other interactive device according to the key feature points at the joints of the user corresponding to the other interactive device.

步骤445:将当前互动装置对应用户的身体姿势映射至第一虚拟化身。Step 445: Map the body posture of the user corresponding to the current interaction device to the first virtual avatar.

步骤446:将其他互动装置对应用户的身体姿势映射至第二虚拟化身。Step 446: Map the body postures of other interactive devices corresponding to the user to the second avatar.

由于视频信息中的数据过多，若全部处理将花费大量的时间，并且也会影响交互的流畅性和效率。因此，在获取了视频信息后，执行控制模块仅抓取视频信息中对应用户关节处的关键特征点(例如手肘等)，执行控制模块根据关节处的关键特征点追踪用户的身体姿势，并将用户的身体姿势映射至第一虚拟化身，即执行控制模块通过提取视频信息中对应用户关节处的关键特征点后，根据该关节处的关键特征点追踪得到用户的身体姿势，然后将用户的身体姿势映射到虚拟化身上，以在虚拟化身上反映出用户的对应姿势，从而提高交互的真实性。Because the video information contains far too much data, processing all of it would take a great deal of time and would also hurt the fluency and efficiency of the interaction. Therefore, after the video information is acquired, the execution control module grabs only the key feature points at the user's joints (for example the elbows) from the video, tracks the user's body posture from those joint feature points, and maps the user's body posture to the first virtual avatar. In other words, the execution control module extracts the key feature points at the user's joints from the video information, tracks the user's body posture from them, and then maps that body posture onto the virtual avatar, so that the avatar reflects the user's corresponding posture and the realism of the interaction is improved.
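A small sketch of this keypoint-to-avatar mapping is given below: a handful of joint keypoints are taken from one frame, a joint angle is derived from them, and the angle is applied to the corresponding avatar bone. The keypoint values and the Avatar API are illustrative assumptions, not the patented implementation.

```python
import math
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def joint_angle(a: Vec3, b: Vec3, c: Vec3) -> float:
    """Angle at b (degrees) formed by segments b->a and b->c, e.g. the elbow angle."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

class Avatar:
    def __init__(self, name: str):
        self.name = name
        self.bone_angles: Dict[str, float] = {}

    def set_bone(self, bone: str, angle_deg: float) -> None:
        self.bone_angles[bone] = angle_deg

# Key feature points grabbed from one frame of the user's video (mocked here).
keypoints: Dict[str, Vec3] = {
    "right_shoulder": (0.2, 1.45, 0.0),
    "right_elbow":    (0.45, 1.30, 0.05),
    "right_wrist":    (0.50, 1.05, 0.20),
}

first_avatar = Avatar("first_avatar")
first_avatar.set_bone("right_elbow",
                      joint_angle(keypoints["right_shoulder"],
                                  keypoints["right_elbow"],
                                  keypoints["right_wrist"]))
print(first_avatar.bone_angles)
```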

图8是本申请另一示例性实施例提供的一种多人互动场景下的身体实时追踪控制方法的流程示意图。当前互动装置的主机包括多个虚拟场景和多个虚拟化身;其中,如图8所示,上述身体实时追踪控制方法还可以包括:Fig. 8 is a schematic flowchart of a real-time body tracking control method in a multi-person interaction scenario provided by another exemplary embodiment of the present application. The host of the current interactive device includes multiple virtual scenes and multiple virtual avatars; wherein, as shown in Figure 8, the above-mentioned body real-time tracking control method may also include:

步骤450:接收用户的选择指令。Step 450: Receive a user's selection instruction.

其中,选择指令表征用户选取一个虚拟场景和一个虚拟化身的操作信号。可以预先在当前互动装置的主机中植入场景模式的软件程序,该软件程序可以虚拟出多个虚拟场景和多个虚拟化身,其中,虚拟场景可以是客厅模式(多人竞舞等)、影院模式(多人观影等)、沙滩模式(多人排球等)等,虚拟化身可以是男士或女士(还可以包括多种形象和着装等)。用户在进行多人互动时,可以进入该软件程序选择自己喜欢的虚拟场景和虚拟化身(可以包括自己和其他用户的)。Wherein, the selection instruction represents an operation signal for the user to select a virtual scene and a virtual avatar. The software program of the scene mode can be implanted in the host computer of the current interactive device in advance, and the software program can virtualize a plurality of virtual scenes and a plurality of virtual avatars, wherein the virtual scene can be a living room mode (multiplayer dance competition, etc.), a theater mode (multiple people watching movies, etc.), beach mode (multiplayer volleyball, etc.), etc., the virtual avatar can be a man or a woman (can also include multiple images and dresses, etc.). When the user is interacting with multiple people, he can enter the software program to select his favorite virtual scene and virtual avatar (which can include his own and other users').

步骤460:根据选择指令控制当前互动装置的主机显示选取的虚拟场景和虚拟化身。Step 460: Control the host of the current interactive device to display the selected virtual scene and avatar according to the selection instruction.

在用户选定了虚拟场景和虚拟化身后，当前互动装置的主机显示该选取的虚拟场景和虚拟化身，并且以该虚拟化身在该虚拟场景中进行互动游戏或观影等，从而提高了用户的互动体验感。After the user has selected a virtual scene and a virtual avatar, the host of the current interactive device displays the selected virtual scene and avatar, and the user plays interactive games, watches movies, and so on in that virtual scene as that avatar, which improves the user's sense of interactive experience.
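A small sketch of the selection flow just described, assuming a hypothetical catalog and API: the host keeps built-in scenes and avatars, receives a selection instruction, and switches the displayed scene and avatar accordingly.

```python
# Hypothetical catalog; the actual scene/avatar lists live in the host's software program.
SCENES = {"living_room": "group dance", "cinema": "group movie watching", "beach": "beach volleyball"}
AVATARS = {"male_casual": "male, casual outfit", "female_sport": "female, sports outfit"}

def apply_selection(scene_id: str, avatar_id: str) -> dict:
    if scene_id not in SCENES or avatar_id not in AVATARS:
        raise ValueError("unknown scene or avatar in selection instruction")
    # On the device this would trigger the renderer; here we just return the new state.
    return {"scene": scene_id, "avatar": avatar_id}

print(apply_selection("living_room", "female_sport"))
```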

图9是本申请一示例性实施例提供的一种多人互动场景下的互动系统的结构示意图。如图9所示,该应用于多人互动场景下的互动系统包括多个如上任一项所述的通讯连接的互动装置1;其中,互动装置包括多个掌机、或是多个虚拟现实设备、或是至少一个掌机与至少一个虚拟现实设备。Fig. 9 is a schematic structural diagram of an interaction system in a multi-person interaction scenario provided by an exemplary embodiment of the present application. As shown in Figure 9, the interactive system applied to multi-person interactive scenarios includes a plurality of communication-connectedinteractive devices 1 as described in any one of the above; wherein, the interactive devices include a plurality of handheld devices, or a plurality of virtual reality equipment, or at least one handheld and at least one virtual reality device.

本申请提供的一种多人互动场景下的互动系统，通过在相互通讯连接的多个互动装置的每个互动装置中设置姿态获取模块以在互动过程中实时获取当前互动装置对应用户的第一姿态信息，并且设置与互动系统中的其他互动装置通讯连接的信息接收模块和信息发送模块，以接收其他互动装置对应用户的第二姿态信息和发送第一姿态信息至其他互动装置，另外，设置与姿态获取模块和信息接收模块通讯连接的执行控制模块，在获取了第一姿态信息和第二姿态信息后，执行控制模块根据第一姿态信息和第二姿态信息分别生成第一虚拟化身的动作姿态和第二虚拟化身的动作姿态；即在多个互动装置联合互动时，每个互动装置都获取对应用户的姿态信息和互动系统中其他互动装置对应用户的姿态信息，根据第一姿态信息和第二姿态信息分别生成第一虚拟化身的动作姿态和第二虚拟化身的动作姿态，以反映当前互动装置对应用户和其他互动装置对应用户的互动，以实现多人在同一个场景中的互动显示，从而提高了多人互动的生动性和体验感。In the interaction system for a multi-person interaction scene provided by the present application, each of the communicatively connected interactive devices is provided with a posture acquisition module that acquires, in real time during the interaction, the first posture information of the user corresponding to the current interactive device, together with an information receiving module and an information sending module communicatively connected to the other interactive devices in the system, which receive the second posture information of the users of the other interactive devices and send the first posture information to those devices. In addition, an execution control module communicatively connected to the posture acquisition module and the information receiving module is provided; after the first posture information and the second posture information are acquired, the execution control module generates the action posture of the first virtual avatar and the action posture of the second virtual avatar from the first posture information and the second posture information respectively. That is, when multiple interactive devices interact jointly, each interactive device obtains the posture information of its own user and the posture information of the users of the other interactive devices in the system, and generates the action postures of the first and second virtual avatars accordingly, so as to reflect the interaction between the user of the current interactive device and the users of the other interactive devices, realizing an interactive display of multiple people in the same scene and improving the vividness and experience of multi-person interaction.

在一实施例中，互动装置包括多个掌机，每个掌机包括基座、主机和手柄，其中，主机设置于基座上，手柄和主机分离设置；姿态获取模块设置于主机上，姿态获取模块用于录制当前掌机对应用户的视频数据以获取当前掌机对应用户的第一姿态信息并将其发送至其他掌机同时接收其他掌机的第二姿态信息。In one embodiment, the interactive devices include multiple handhelds, and each handheld includes a base, a host, and a handle, where the host is arranged on the base and the handle is separate from the host. The posture acquisition module is arranged on the host and records video data of the user corresponding to the current handheld to obtain the first posture information of that user, sends it to the other handhelds, and at the same time receives the second posture information of the other handhelds.

若互动装置包括多个掌机，每个掌机利用主机上的姿态获取模块获取自身的第一姿态信息，并且利用信息发送模块将第一姿态信息发送至其他掌机，且利用信息接收模块获取其他掌机的第二姿态信息，从而可以实现互动系统中的多个掌机之间姿态信息的交互。If the interactive devices include multiple handhelds, each handheld uses the posture acquisition module on its host to obtain its own first posture information, uses the information sending module to send the first posture information to the other handhelds, and uses the information receiving module to obtain the second posture information of the other handhelds, so that posture information can be exchanged among the multiple handhelds in the interaction system.

在一实施例中，互动装置包括至少一个掌机与虚拟现实设备，虚拟现实设备包括VR头显和手柄，姿态获取模块通过对VR头显、手柄进行空间定位以获取虚拟现实设备对应用户的第二姿态信息，并将其发送至掌机同时接收掌机的第一姿态信息。In one embodiment, the interactive devices include at least one handheld and a virtual reality device, and the virtual reality device includes a VR headset and a handle. The posture acquisition module obtains the second posture information of the user corresponding to the virtual reality device by spatially locating the VR headset and the handle, sends it to the handheld, and at the same time receives the first posture information of the handheld.

若互动装置包括掌机和虚拟现实设备，虚拟现实设备利用姿态获取模块对VR头显、手柄进行空间定位以获取自身的第二姿态信息，并且利用信息发送模块将第二姿态信息发送至掌机，且利用信息接收模块获取掌机的第一姿态信息，从而可以实现掌机和虚拟现实设备之间姿态信息的交互。If the interactive devices include a handheld and a virtual reality device, the virtual reality device uses the posture acquisition module to spatially locate the VR headset and the handle to obtain its own second posture information, uses the information sending module to send the second posture information to the handheld, and uses the information receiving module to obtain the first posture information of the handheld, so that posture information can be exchanged between the handheld and the virtual reality device.

在一实施例中，虚拟现实设备包括分离式VR头显，掌机的主机与分离式VR头显的主机为同一个主机，姿态获取模块设置于同一个主机中，姿态获取模块通过对分离式VR头显、手柄进行空间定位以获取虚拟现实设备对应用户的第二姿态信息，并将其发送至掌机同时接收掌机的第一姿态信息。In one embodiment, the virtual reality device includes a separate VR headset, the host of the handheld and the host of the separate VR headset are the same host, and the posture acquisition module is arranged in that shared host. The posture acquisition module obtains the second posture information of the user corresponding to the virtual reality device by spatially locating the separate VR headset and the handle, sends it to the handheld, and at the same time receives the first posture information of the handheld.

利用同一主机实现掌机和分离式VR头显的互动，即利用主机中的姿态获取模块同时获取掌机用户和VR头显用户的姿态信息，并且根据掌机用户和VR头显用户的姿态信息生成对应的虚拟化身的动作姿态。The same host is used to implement the interaction between the handheld and the separate VR headset; that is, the posture acquisition module in the host simultaneously obtains the posture information of the handheld user and of the VR headset user, and generates the action postures of the corresponding virtual avatars from that posture information.

在一实施例中，如图9所示，该互动系统还可以包括：云端服务器2，云端服务器2与多个互动装置1通讯连接，多个互动装置的主机分别处理各自用户的视频信息得到对应用户的姿态信息，云端服务器获取多个互动装置对应用户的姿态信息并交互发送多个互动装置的姿态信息；或者云端服务器与多个互动装置通讯连接，用于获取多个互动装置对应用户的视频信息，云端服务器处理视频信息得到互动装置对应用户姿态信息，云端服务器交互发送多个互动装置对应用户的姿态信息。In one embodiment, as shown in FIG. 9, the interaction system may further include a cloud server 2 communicatively connected to the multiple interactive devices 1. Either the hosts of the interactive devices each process their own user's video information to obtain that user's posture information, and the cloud server collects the posture information of the users of the interactive devices and cross-distributes it among them; or the cloud server is communicatively connected to the interactive devices to collect the video information of their users, processes the video information itself to obtain the users' posture information, and then cross-distributes that posture information among the interactive devices.

具体的，多个互动装置1通过WebSocket技术与云端服务器2无线通讯连接。WebSocket是一种在单个TCP连接上进行全双工通信的协议。WebSocket使得客户端和服务器之间的数据交换变得更加简单，允许服务端主动向客户端推送数据。在WebSocket API中，浏览器和服务器只需要完成一次握手，两者之间就直接可以创建持久性的连接，并进行双向数据传输。Specifically, the multiple interactive devices 1 are wirelessly connected to the cloud server 2 using WebSocket technology. WebSocket is a protocol for full-duplex communication over a single TCP connection. WebSocket simplifies data exchange between client and server and allows the server to actively push data to the client. With the WebSocket API, the browser and the server only need to complete one handshake to establish a persistent connection between them and transmit data in both directions.

多个互动装置1组建形成一个互动系统后，该互动系统中的每个互动装置1与云端服务器2建立长链接，以实现实时通讯传输数据。多个互动装置1在互动过程中，多个互动装置1分别通过信息发送模块或无线通讯模块将自身的视频信息上传至云端服务器2的信息收集模块中，云端服务器2的数据处理模块对接收到的视频信息进行处理后得到姿态信息，再将姿态信息分别发送至各个互动装置1；或者多个互动装置1通过自身的主机处理视频信息得到姿态信息后，分别通过信息发送模块或无线通讯模块将自身的姿态信息上传至云端服务器2的信息收集模块中，云端服务器2将姿态信息分别发送至各个互动装置1。具体的，数据处理模块对接收到的姿态信息进行分类、标记、格式转换等处理后再分别发送至各个互动装置1中，其中，发送至当前互动装置的数据为其他互动装置的姿态信息，从而保证每个互动装置1可以通过云端服务器2获取其他互动装置的姿态信息。After multiple interactive devices 1 are assembled into an interaction system, each interactive device 1 in the system establishes a long-lived connection with the cloud server 2 for real-time communication and data transmission. During the interaction, each interactive device 1 uploads its own video information through its information sending module or wireless communication module to the information collection module of the cloud server 2, the data processing module of the cloud server 2 processes the received video information to obtain posture information, and the cloud server then sends the posture information to each interactive device 1. Alternatively, each interactive device 1 first processes its video information on its own host to obtain posture information and then uploads that posture information to the information collection module of the cloud server 2, which distributes it to the interactive devices 1. Specifically, the data processing module classifies, tags, and format-converts the received posture information before sending it to each interactive device 1, where the data sent to the current interactive device is the posture information of the other interactive devices, which ensures that every interactive device 1 can obtain the posture information of the other interactive devices through the cloud server 2.
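Below is a sketch of the relay role the cloud server plays over these long-lived connections: each incoming pose message is tagged and forwarded to every other connected device, so each device ends up with the second posture information of all its peers. The transport is abstracted away (callbacks stand in for open WebSocket connections), and the class and field names are assumptions.

```python
from typing import Callable, Dict

class PoseRelay:
    def __init__(self):
        # device_id -> callable that would push a message down that device's connection
        self.connections: Dict[str, Callable[[dict], None]] = {}

    def register(self, device_id: str, push: Callable[[dict], None]) -> None:
        self.connections[device_id] = push

    def on_pose_message(self, sender_id: str, pose: dict) -> None:
        tagged = {"from": sender_id, "type": "second_pose", "pose": pose}
        for device_id, push in self.connections.items():
            if device_id != sender_id:            # never echo a device's own pose back
                push(tagged)

relay = PoseRelay()
relay.register("device_A", lambda msg: print("A <-", msg))
relay.register("device_B", lambda msg: print("B <-", msg))
relay.on_pose_message("device_A", {"joints": {"head": [0.0, 1.7, 0.0]}})
```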

下面，参考图10来描述根据本申请实施例的电子设备。该电子设备可以是第一设备和第二设备中的任一个或两者、或与它们独立的单机设备，该单机设备可以与第一设备和第二设备进行通信，以从它们接收所采集到的输入信号。Next, an electronic device according to an embodiment of the present application is described with reference to FIG. 10. The electronic device may be either or both of the first device and the second device, or a stand-alone device independent of them; the stand-alone device may communicate with the first device and the second device to receive the collected input signals from them.

图10图示了根据本申请实施例的电子设备的框图。FIG. 10 illustrates a block diagram of an electronic device according to an embodiment of the present application.

如图10所示,电子设备10包括一个或多个处理器11和存储器12。As shown in FIG. 10 , anelectronic device 10 includes one or more processors 11 and a memory 12 .

处理器11可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其他形式的处理单元,并且可以控制电子设备10中的其他组件以执行期望的功能。Processor 11 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components inelectronic device 10 to perform desired functions.

存储器12可以包括一个或多个计算机程序产品，所述计算机程序产品可以包括各种形式的计算机可读存储介质，例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序指令，处理器11可以运行所述程序指令，以实现上文所述的本申请的各个实施例的多人互动场景下的身体实时追踪控制方法以及/或者其他期望的功能。在所述计算机可读存储介质中还可以存储诸如输入信号、信号分量、噪声分量等各种内容。The memory 12 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the real-time body tracking control methods in a multi-person interaction scene of the various embodiments of the present application described above and/or other desired functions. Various contents such as input signals, signal components, and noise components may also be stored in the computer-readable storage medium.

在一个示例中,电子设备10还可以包括:输入装置13和输出装置14,这些组件通过总线系统和/或其他形式的连接机构(未示出)互连。In one example, theelectronic device 10 may further include: an input device 13 and an output device 14, and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).

在该电子设备是单机设备时,该输入装置13可以是通信网络连接器,用于从第一设备和第二设备接收所采集的输入信号。When the electronic device is a stand-alone device, the input device 13 may be a communication network connector for receiving collected input signals from the first device and the second device.

此外,该输入设备13还可以包括例如键盘、鼠标等等。In addition, the input device 13 may also include, for example, a keyboard, a mouse, and the like.

该输出装置14可以向外部输出各种信息,包括确定出的距离信息、方向信息等。该输出设备14可以包括例如显示器、扬声器、打印机、以及通信网络及其所连接的远程输出设备等等。The output device 14 can output various information to the outside, including determined distance information, direction information, and the like. The output device 14 may include, for example, a display, a speaker, a printer, a communication network and its connected remote output devices, and the like.

当然,为了简化,图10中仅示出了该电子设备10中与本申请有关的组件中的一些,省略了诸如总线、输入/输出接口等等的组件。除此之外,根据具体应用情况,电子设备10还可以包括任何其他适当的组件。Of course, for the sake of simplicity, only some of the components related to the present application in theelectronic device 10 are shown in FIG. 10 , and components such as bus, input/output interface, etc. are omitted. In addition, according to specific application conditions, theelectronic device 10 may also include any other suitable components.

除了上述方法和设备以外，本申请的实施例还可以是计算机程序产品，其包括计算机程序指令，所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述"示例性方法"部分中描述的根据本申请各种实施例的多人互动场景下的身体实时追踪控制方法中的步骤。In addition to the above methods and devices, an embodiment of the present application may also be a computer program product including computer program instructions that, when executed by a processor, cause the processor to perform the steps of the real-time body tracking control method in a multi-person interaction scene according to the various embodiments of the present application described in the "Exemplary Method" section of this specification.

所述计算机程序产品可以以一种或多种程序设计语言的任意组合来编写用于执行本申请实施例操作的程序代码，所述程序设计语言包括面向对象的程序设计语言，诸如Java、C++等，还包括常规的过程式程序设计语言，诸如"C"语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。The program code for carrying out the operations of the embodiments of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.

此外，本申请的实施例还可以是计算机可读存储介质，其上存储有计算机程序指令，所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述"示例性方法"部分中描述的根据本申请各种实施例的多人互动场景下的身体实时追踪控制方法中的步骤。In addition, an embodiment of the present application may also be a computer-readable storage medium storing computer program instructions that, when executed by a processor, cause the processor to perform the steps of the real-time body tracking control method in a multi-person interaction scene according to the various embodiments of the present application described in the "Exemplary Method" section of this specification.

所述计算机可读存储介质可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以包括但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any combination thereof. More specific examples (non-exhaustive list) of readable storage media include: electrical connection with one or more conductors, portable disk, hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.

以上结合具体实施例描述了本申请的基本原理，但是，需要指出的是，在本申请中提及的优点、优势、效果等仅是示例而非限制，不能认为这些优点、优势、效果等是本申请的各个实施例必须具备的。另外，上述公开的具体细节仅是为了示例的作用和便于理解的作用，而非限制，上述细节并不限制本申请为必须采用上述具体的细节来实现。The basic principles of the present application have been described above in conjunction with specific embodiments. It should be pointed out, however, that the merits, advantages, effects, and the like mentioned in this application are only examples and not limitations, and it should not be assumed that every embodiment of the present application must possess them. In addition, the specific details disclosed above are provided only for illustration and ease of understanding, not as limitations, and they do not restrict the present application to being implemented with those specific details.

本申请中涉及的器件、装置、设备、系统的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、系统。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。The block diagrams of devices, devices, devices, and systems involved in this application are only illustrative examples and are not intended to require or imply that they must be connected, arranged, and configured in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, devices, devices, systems may be connected, arranged, configured in any manner. Words such as "including", "comprising", "having" and the like are open-ended words meaning "including but not limited to" and may be used interchangeably therewith. As used herein, the words "or" and "and" refer to the word "and/or" and are used interchangeably therewith, unless the context clearly dictates otherwise. As used herein, the word "such as" refers to the phrase "such as but not limited to" and can be used interchangeably therewith.

还需要指出的是,在本申请的装置、设备和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本申请的等效方案。It should also be pointed out that in the devices, equipment and methods of the present application, each component or each step can be decomposed and/or reassembled. These decompositions and/or recombinations should be considered equivalents of this application.

提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本申请。对这些方面的各种修改对于本领域技术人员而言是非常显而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本申请的范围。因此,本申请不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本申请的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.

Claims (21)

Translated from Chinese
1.一种应用于多人互动场景下的互动装置,其特征在于,应用于互动系统中,所述互动系统包括多个通讯连接的所述互动装置;所述互动装置包括:1. An interactive device applied to a multi-person interactive scene, characterized in that it is applied in an interactive system, and the interactive system includes a plurality of interactive devices connected by communication; the interactive device includes:姿态获取模块,所述姿态获取模块用于获取当前互动装置对应用户的第一姿态信息;A gesture acquisition module, the gesture acquisition module is used to acquire the first gesture information of the user corresponding to the current interactive device;信息发送模块,所述信息发送模块与所述姿态获取模块通讯连接,用于将所述第一姿态信息发送至所述其他互动装置;An information sending module, the information sending module is communicatively connected with the attitude acquisition module, and is used to send the first attitude information to the other interactive devices;信息接收模块,所述信息接收模块与所述互动系统中的其他互动装置通讯连接,用于接收所述其他互动装置对应用户的第二姿态信息;以及An information receiving module, the information receiving module communicates with other interactive devices in the interactive system, and is used to receive the second gesture information of the user corresponding to the other interactive devices; and执行控制模块,所述执行控制模块与所述姿态获取模块和所述信息接收模块通讯连接,用于根据所述第一姿态信息和所述第二姿态信息分别生成第一虚拟化身的动作姿态和第二虚拟化身的动作姿态;其中,所述第一虚拟化身和所述第二虚拟化身在所述当前互动装置的虚拟场景中显示且分别表征所述当前互动装置对应用户和所述其他互动装置对应用户。an execution control module, the execution control module communicates with the posture acquisition module and the information receiving module, and is used to generate the action posture and the action posture of the first virtual avatar according to the first posture information and the second posture information respectively; The action gesture of the second virtual avatar; wherein, the first virtual avatar and the second virtual avatar are displayed in the virtual scene of the current interactive device and respectively represent the corresponding user of the current interactive device and the other interactive devices corresponding to the user.2.根据权利要求1所述的应用于多人互动场景下的互动装置,其特征在于,所述当前互动装置包括主机,所述姿态获取模块包括摄像头及惯性传感器,所述摄像头和所述惯性传感器设置于所述主机上,所述摄像头获取所述当前互动装置对应用户的视频信息,所述惯性传感器用来获得用户姿态的IMU数据信息,所述摄像头将所述视频信息传输至所述主机,所述惯性传感器将所述IMU数据信息传输至所述主机,所述主机处理所述视频信息及所述IMU数据信息得到所述第一姿态信息。2. The interactive device applied in a multi-person interactive scene according to claim 1, wherein the current interactive device includes a host, the attitude acquisition module includes a camera and an inertial sensor, and the camera and the inertial The sensor is set on the host, the camera acquires the video information of the user corresponding to the current interactive device, the inertial sensor is used to obtain the IMU data information of the user's posture, and the camera transmits the video information to the host The inertial sensor transmits the IMU data information to the host, and the host processes the video information and the IMU data information to obtain the first attitude information.3.根据权利要求1所述的应用于多人互动场景下的互动装置,其特征在于,所述当前互动装置包括主机,所述姿态获取模块包括摄像头及惯性传感器,所述摄像头和所述惯性传感器设置于所述主机上,所述摄像头获取所述当前互动装置对应用户的视频信息,所述惯性传感器用来获得用户姿态的IMU数据信息,所述摄像头将所述视频信息发送至云端服务器,所述惯性传感器将所述IMU数据信息发送至所述云端服务器,所述云端服务器处理所述视频信息及所述IMU数据信息得到所述第一姿态信息,所述云端服务器将所述第一姿态信息发送至所述主机。3. 
The interactive device applied in a multi-person interactive scene according to claim 1, wherein the current interactive device includes a host, the attitude acquisition module includes a camera and an inertial sensor, and the camera and the inertial sensor The sensor is arranged on the host, the camera obtains the video information of the user corresponding to the current interactive device, the inertial sensor is used to obtain the IMU data information of the user's posture, and the camera sends the video information to the cloud server, The inertial sensor sends the IMU data information to the cloud server, the cloud server processes the video information and the IMU data information to obtain the first posture information, and the cloud server sends the first posture information Information is sent to the host.4.根据权利要求2或3所述的应用于多人互动场景下的互动装置,其特征在于,所述第一姿态信息包括所述当前互动装置对应用户的关节处的关键特征点,所述第二姿态信息包括所述其他互动装置对应用户的关节处的关键特征点;所述执行控制模块抓取所述当前互动装置对应用户的视频信息中所述当前互动装置对应用户的关节处的关键特征点,所述执行控制模块抓取所述其他互动装置对应用户的视频信息中所述其他互动装置对应用户的关节处的关键特征点,所述执行控制模块根据所述当前互动装置对应用户的关节处的关键特征点追踪所述当前互动装置对应用户的身体姿势,并将所述当前互动装置对应用户的身体姿势映射至所述第一虚拟化身,所述执行控制模块根据所述其他互动装置对应用户的关节处的关键特征点追踪所述其他互动装置对应用户的身体姿势,并将所述其他互动装置对应用户的身体姿势映射至所述第二虚拟化身。4. The interactive device applied in a multi-person interactive scene according to claim 2 or 3, wherein the first gesture information includes the key feature points at the joints of the user corresponding to the current interactive device, and the The second posture information includes the key feature points at the joints of the user corresponding to the other interactive device; the execution control module captures the key points at the joints of the user corresponding to the current interactive device in the video information feature points, the execution control module captures the key feature points at the joints of the user corresponding to the other interactive device in the video information corresponding to the user of the other interactive device, and the execution control module according to the current interactive device corresponding to the user's The key feature points at the joints track the body posture of the user corresponding to the current interaction device, and map the body posture of the user corresponding to the current interaction device to the first avatar, and the execution control module according to the other interaction devices The key feature points corresponding to the joints of the user track the body posture of the user corresponding to the other interactive device, and map the body posture of the user corresponding to the other interactive device to the second avatar.5.根据权利要求1-3中任一项所述的应用于多人互动场景下的互动装置,其特征在于,所述当前互动装置的主机内设软件程序中包括多个虚拟场景和多个虚拟化身,所述执行控制模块接收用户的选择指令并根据所述选择指令控制所述当前互动装置显示选取的所述虚拟场景和所述虚拟化身。5. The interactive device applied in a multi-person interactive scene according to any one of claims 1-3, wherein the built-in software program of the host of the current interactive device includes multiple virtual scenes and multiple For the avatar, the execution control module receives a user's selection instruction and controls the current interactive device to display the selected virtual scene and the avatar according to the selection instruction.6.根据权利要求1-3中任一项所述的应用于多人互动场景下的互动装置,其特征在于,所述当前互动装置包括掌机,所述掌机包括基座、主机和手柄,其中,所述主机设置于所述基座上,所述手柄和所述主机分离设置;所述姿态获取模块设置于所述主机上,所述姿态获取模块用于录制所述当前互动装置对应用户的视频数据以获取所述当前互动装置对应用户的第一姿态信息。6. 
The interactive device applied in a multi-person interactive scene according to any one of claims 1-3, wherein the current interactive device includes a handheld device, and the handheld device includes a base, a host and a handle , wherein, the host is set on the base, and the handle is set separately from the host; the attitude acquisition module is set on the host, and the attitude acquisition module is used to record the current interactive device corresponding The video data of the user is used to obtain the first gesture information of the user corresponding to the current interactive device.7.根据权利要求1-3中任一项所述的应用于多人互动场景下的互动装置,其特征在于,所述当前互动装置包括虚拟现实设备,所述虚拟现实设备包括VR头显和手柄,所述姿态获取模块通过对所述VR头显、所述手柄进行空间定位以获取所述当前互动装置对应用户的第一姿态信息。7. The interactive device applied in a multi-person interactive scene according to any one of claims 1-3, wherein the current interactive device includes a virtual reality device, and the virtual reality device includes a VR head-mounted display and a The handle, the posture acquisition module obtains the first posture information of the user corresponding to the current interactive device by spatially positioning the VR head-mounted display and the handle.8.根据权利要求1-3中任一项所述的应用于多人互动场景下的互动装置,其特征在于,所述当前互动装置包括掌机,所述其他互动装置包括分离式VR头显,所述掌机的主机与所述分离式VR头显的主机为同一个主机,所述姿态获取模块设置于所述同一个主机中。8. The interactive device applied in a multi-person interactive scene according to any one of claims 1-3, wherein the current interactive device includes a handheld device, and the other interactive devices include a separate VR head-mounted display , the host of the handheld device and the host of the separate VR head-mounted display are the same host, and the attitude acquisition module is set in the same host.9.一种应用于多人互动场景下的身体实时追踪控制方法,其特征在于,应用于多人互动场景下互动系统中的互动装置中,所述互动系统包括多个通讯连接的所述互动装置;所述应用于多人互动场景下的身体实时追踪控制方法包括:9. A real-time body tracking control method applied to a multi-person interaction scene, characterized in that it is applied to an interactive device in an interactive system under a multi-person interaction scene, and the interactive system includes a plurality of communication-connected interactive devices. device; the body real-time tracking control method applied in a multi-person interaction scene includes:获取当前互动装置对应用户的第一姿态信息;Obtain the first gesture information of the user corresponding to the current interactive device;将所述第一姿态信息发送至所述互动系统中的其他互动装置;sending the first gesture information to other interactive devices in the interactive system;接收所述其他互动装置对应用户的第二姿态信息;以及receiving second gesture information of the user corresponding to the other interactive device; and根据所述第一姿态信息和所述第二姿态信息分别生成第一虚拟化身的动作姿态和第二虚拟化身的动作姿态;其中,所述第一虚拟化身和所述第二虚拟化身在所述当前互动装置的虚拟场景中显示且分别表征所述当前互动装置对应用户和所述其他互动装置对应用户。According to the first posture information and the second posture information, the action posture of the first virtual avatar and the movement posture of the second virtual avatar are respectively generated; wherein, the first virtual avatar and the second virtual avatar are in the The user corresponding to the current interactive device and the user corresponding to the other interactive devices are displayed in the virtual scene of the current interactive device and represent respectively.10.根据权利要求9所述的应用于多人互动场景下的身体实时追踪控制方法,其特征在于,10. 
The body real-time tracking control method applied in a multi-person interaction scene according to claim 9, characterized in that,所述获取当前互动装置对应用户的第一姿态信息包括:The acquisition of the first gesture information of the user corresponding to the current interactive device includes:获取所述当前互动装置对应用户的视频信息及IMU数据信息;以及Obtain video information and IMU data information of the user corresponding to the current interactive device; and分析处理所述视频信息及所述IMU数据信息得到所述第一姿态信息。Analyzing and processing the video information and the IMU data information to obtain the first attitude information.11.根据权利要求10所述的应用于多人互动场景下的身体实时追踪控制方法,其特征在于,所述获取当前互动装置对应用户的第一姿态信息包括:11. The body real-time tracking control method applied in a multi-person interaction scene according to claim 10, wherein said obtaining the first posture information of the user corresponding to the current interaction device comprises:获取所述当前互动装置对应用户的视频信息及IMU数据信息;Obtain video information and IMU data information of the user corresponding to the current interactive device;将所述视频信息及所述IMU数据信息发送至云端服务器,由所述云端服务器处理所述视频信息及所述IMU数据信息得到所述第一姿态信息;以及Sending the video information and the IMU data information to a cloud server, and processing the video information and the IMU data information by the cloud server to obtain the first attitude information; and接收所述云端服务器发送的所述第一姿态信息。receiving the first gesture information sent by the cloud server.12.根据权利要求10或11所述的应用于多人互动场景下的身体实时追踪控制方法,其特征在于,所述第一姿态信息包括所述当前互动装置对应用户的关节处的关键特征点,所述第二姿态信息包括所述其他互动装置对应用户的关节处的关键特征点;12. The body real-time tracking control method applied in a multi-person interaction scene according to claim 10 or 11, wherein the first posture information includes the key feature points at the joints of the user corresponding to the current interaction device , the second posture information includes the key feature points at the joints of the user corresponding to the other interactive device;其中,所述根据所述第一姿态信息和所述第二姿态信息分别生成第一虚拟化身的动作姿态和第二虚拟化身的动作姿态包括:Wherein, said generating the action gesture of the first virtual avatar and the action gesture of the second virtual avatar respectively according to the first posture information and the second posture information includes:抓取所述当前互动装置对应用户的视频信息中所述当前互动装置对应用户的关节处的关键特征点;Grab the key feature points at the joints of the user corresponding to the current interactive device in the video information of the user corresponding to the current interactive device;抓取所述其他互动装置对应用户的视频信息中所述其他互动装置对应用户的关节处的关键特征点;Grab the key feature points at the joints of the user corresponding to the other interactive device in the video information of the user corresponding to the other interactive device;根据所述当前互动装置对应用户的关节处的关键特征点追踪所述当前互动装置对应用户的身体姿势;Tracking the body posture of the user corresponding to the current interaction device according to the key feature points at the joints of the user corresponding to the current interaction device;根据所述其他互动装置对应用户的关节处的关键特征点追踪所述其他互动装置对应用户的身体姿势;Tracking the body posture of the user corresponding to the other interactive device according to the key feature points at the joints of the user corresponding to the other interactive device;将所述当前互动装置对应用户的身体姿势映射至所述第一虚拟化身;以及mapping a body gesture of a user corresponding to the current interaction device to the first avatar; and将所述其他互动装置对应用户的身体姿势映射至所述第二虚拟化身。and mapping the body posture of the user corresponding to the other interactive device to the second virtual avatar.13.根据权利要求9-11中任一项所述的应用于多人互动场景下的身体实时追踪控制方法,其特征在于,所述当前互动装置的主机内设软件程序中包括多个虚拟场景和多个虚拟化身;13. 
The body real-time tracking control method applied in a multi-person interactive scene according to any one of claims 9-11, characterized in that, the built-in software program of the host of the current interactive device includes a plurality of virtual scenes and multiple avatars;其中,所述互动方法还包括:Wherein, the interactive method also includes:接收用户的选择指令;其中,所述选择指令表征所述用户选取一个虚拟场景和一个虚拟化身的操作信号;以及receiving a user's selection instruction; wherein, the selection instruction represents an operation signal for the user to select a virtual scene and an avatar; and根据所述选择指令控制所述当前互动装置显示选取的所述虚拟场景和所述虚拟化身。and controlling the current interactive device to display the selected virtual scene and virtual avatar according to the selection instruction.14.根据权利要求9-11中任一项所述的应用于多人互动场景下的身体实时追踪控制方法,其特征在于,所述当前互动装置包括掌机,所述掌机包括基座、主机和手柄,所述主机设置于所述基座上,所述手柄和所述主机分离设置;其中,所述获取当前互动装置对应用户的第一姿态信息包括:14. The body real-time tracking control method applied in a multi-person interaction scene according to any one of claims 9-11, wherein the current interaction device includes a handheld device, and the handheld device includes a base, A host and a handle, the host is arranged on the base, and the handle is set separately from the host; wherein the acquisition of the first gesture information corresponding to the user of the current interactive device includes:录制所述当前互动装置对应用户的视频数据以获取所述当前互动装置对应用户的第一姿态信息。Recording video data of the user corresponding to the current interactive device to obtain first gesture information of the user corresponding to the current interactive device.15.根据权利要求9-11中任一项所述的应用于多人互动场景下的身体实时追踪控制方法,其特征在于,所述当前互动装置包括虚拟现实设备,所述虚拟现实设备包括VR头显和手柄;其中,所述获取当前互动装置对应用户的第一姿态信息包括:15. The body real-time tracking control method applied in a multi-person interaction scene according to any one of claims 9-11, wherein the current interaction device includes a virtual reality device, and the virtual reality device includes a VR A head-mounted display and a handle; wherein, the acquisition of the first gesture information corresponding to the user of the current interactive device includes:通过对所述VR头显、所述手柄进行空间定位以获取所述当前互动装置对应用户的第一姿态信息。The first gesture information of the user corresponding to the current interactive device is obtained by spatially positioning the VR head-mounted display and the handle.16.一种应用于多人互动场景下的互动系统,其特征在于,包括多个如权利要求1-8中任一项所述且相互通讯连接的互动装置;其中,所述互动装置包括多个掌机、或是多个虚拟现实设备、或是至少一个掌机与至少一个虚拟现实设备。16. An interactive system applied to a multi-person interactive scene, characterized in that it includes a plurality of interactive devices as described in any one of claims 1-8 and connected to each other through communication; wherein, the interactive devices include multiple A handheld, or multiple virtual reality devices, or at least one handheld and at least one virtual reality device.17.根据权利要求16所述的应用于多人互动场景下的互动系统,其特征在于,所述互动装置包括多个掌机,每个所述掌机包括基座、主机和手柄,其中,所述主机设置于所述基座上,所述手柄和所述主机分离设置;所述姿态获取模块设置于所述主机上,所述姿态获取模块用于录制所述当前掌机对应用户的视频数据以获取所述当前掌机对应用户的第一姿态信息并将其发送至其他掌机同时接收其他掌机的第二姿态信息。17. 
The interactive system applied in a multi-person interactive scene according to claim 16, wherein the interactive device includes a plurality of handheld devices, each of which includes a base, a host and a handle, wherein, The host is arranged on the base, and the handle is set separately from the host; the attitude acquisition module is arranged on the host, and the attitude acquisition module is used to record the video of the user corresponding to the current handheld Data to obtain the first gesture information of the user corresponding to the current handheld and send it to other handhelds while receiving the second gesture information of other handhelds.18.根据权利要求16所述的应用于多人互动场景下的互动系统,其特征在于,所述互动装置包括至少一个掌机与虚拟现实设备,所述虚拟现实设备包括VR头显和手柄,所述姿态获取模块通过对所述VR头显、所述手柄进行空间定位以获取所述虚拟现实设备对应用户的第二姿态信息,并将其发送至所述掌机同时接收所述掌机的第一姿态信息。18. The interactive system applied in a multi-person interactive scene according to claim 16, wherein the interactive device includes at least one handheld device and a virtual reality device, and the virtual reality device includes a VR head-mounted display and a handle, The posture acquisition module obtains the second posture information of the corresponding user of the virtual reality device by spatially positioning the VR head-mounted display and the handle, and sends it to the handheld device while receiving the second posture information of the handheld device. First pose information.19.根据权利要求18所述的应用于多人互动场景下的互动系统,其特征在于,所述虚拟现实设备包括分离式VR头显,所述掌机的主机与所述分离式VR头显的主机为同一个主机,所述姿态获取模块设置于所述同一个主机中,所述姿态获取模块通过对所述分离式VR头显、所述手柄进行空间定位以获取所述虚拟现实设备对应用户的第二姿态信息,并将其发送至所述掌机同时接收所述掌机的第一姿态信息。19. The interactive system applied in a multi-person interaction scene according to claim 18, wherein the virtual reality device comprises a separate VR head display, and the host of the handheld device and the separate VR head display The host is the same host, and the attitude acquisition module is set in the same host, and the attitude acquisition module obtains the corresponding position of the virtual reality device by spatially positioning the separate VR head display and the handle. The second posture information of the user is sent to the handheld device while receiving the first posture information of the handheld device.20.根据权利要求16所述的应用于多人互动场景下的互动系统,其特征在于,所述互动系统还包括:20. The interactive system applied in a multi-person interactive scene according to claim 16, wherein the interactive system further comprises:云端服务器,所述云端服务器与多个所述互动装置通讯连接,多个所述互动装置的主机分别处理各自用户的视频信息得到对应用户的姿态信息,所述云端服务器获取多个所述互动装置对应用户的姿态信息并交互发送多个所述互动装置的姿态信息。A cloud server, the cloud server communicates with a plurality of the interactive devices, the hosts of the plurality of interactive devices respectively process the video information of their respective users to obtain the posture information of the corresponding users, and the cloud server obtains the plurality of interactive devices Corresponding to the gesture information of the user and interactively sending the gesture information of a plurality of said interactive devices.21.根据权利要求16所述的应用于多人互动场景下的互动系统,其特征在于,所述互动系统还包括:21. 
The interactive system applied in a multi-person interactive scene according to claim 16, wherein the interactive system further comprises:云端服务器,所述云端服务器与多个所述互动装置通讯连接,用于获取多个所述互动装置对应用户的视频信息,所述云端服务器处理所述视频信息得到所述互动装置对应用户姿态信息,所述云端服务器交互发送多个所述互动装置对应用户的姿态信息。A cloud server, the cloud server communicates with a plurality of the interactive devices, and is used to obtain video information corresponding to users of the plurality of interactive devices, and the cloud server processes the video information to obtain gesture information of users corresponding to the interactive devices The cloud server interactively sends gesture information corresponding to users of the plurality of interactive devices.
CN202211213877.XA2022-09-302022-09-30Interaction device, control method and interaction system applied to multi-person interaction scenePendingCN115463413A (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN202211213877.XACN115463413A (en)2022-09-302022-09-30Interaction device, control method and interaction system applied to multi-person interaction scene

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
CN202211213877.XACN115463413A (en)2022-09-302022-09-30Interaction device, control method and interaction system applied to multi-person interaction scene

Publications (1)

Publication Number | Publication Date
CN115463413A (en) | 2022-12-13

Family

ID=84335158

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202211213877.XAPendingCN115463413A (en)2022-09-302022-09-30Interaction device, control method and interaction system applied to multi-person interaction scene

Country Status (1)

Country | Link
CN (1) | CN115463413A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105188516A (en) * | 2013-03-11 | 2015-12-23 | 奇跃公司 | System and method for augmented and virtual reality
CN104616028A (en) * | 2014-10-14 | 2015-05-13 | 北京中科盘古科技发展有限公司 | Method for recognizing posture and action of human limbs based on space division study
CN106534125A (en) * | 2016-11-11 | 2017-03-22 | 厦门汇鑫元软件有限公司 | Method for realizing VR multi-person interaction system on the basis of local area network
CN207586859U (en) * | 2018-01-05 | 2018-07-06 | 重庆创通联达智能技术有限公司 | A kind of system for supporting multi-person interactive
CN109375764A (en) * | 2018-08-28 | 2019-02-22 | 北京凌宇智控科技有限公司 | A kind of head-mounted display, cloud server, VR system and data processing method
CN113093910A (en) * | 2021-04-08 | 2021-07-09 | 中国工商银行股份有限公司 | Interaction method and interaction device based on VR scene, electronic device and storage medium
CN113610018A (en) * | 2021-08-11 | 2021-11-05 | 暨南大学 | VR real-time communication interactive system and method combining 5G, expression tracking and beautifying

Similar Documents

PublicationPublication DateTitle
JP6982215B2 (en) Rendering virtual hand poses based on detected manual input
JP7366196B2 (en) Widespread simultaneous remote digital presentation world
US12134037B2 (en)Method and system for directing user attention to a location based game play companion application
US11145125B1 (en)Communication protocol for streaming mixed-reality environments between multiple devices
KR101855639B1 (en)Camera navigation for presentations
US9947139B2 (en)Method and apparatus for providing hybrid reality environment
US8419545B2 (en)Method and system for controlling movements of objects in a videogame
US20180373413A1 (en)Information processing method and apparatus, and program for executing the information processing method on computer
US20150070274A1 (en)Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
JP7503122B2 (en) Method and system for directing user attention to a location-based gameplay companion application - Patents.com
GB2556347A (en)Virtual reality
EP2243525A2 (en)Method and system for creating a shared game space for a networked game
CN109069934A (en)Spectators' view tracking to the VR user in reality environment (VR)
US20190043263A1 (en)Program executed on a computer for providing vertual space, method and information processing apparatus for executing the program
CN106873767A (en)The progress control method and device of a kind of virtual reality applications
US20220323862A1 (en)Program, method, and information processing terminal
CN114020978B (en)Park digital roaming display method and system based on multi-source information fusion
CN114053693A (en)Object control method and device in virtual scene and terminal equipment
KR20220105354A (en)Method and system for providing educational contents experience service based on Augmented Reality
CN115463413A (en)Interaction device, control method and interaction system applied to multi-person interaction scene
CN115624740A (en)Virtual reality equipment, control method, device and system thereof, and interaction system
PerlDistributed Multi-User VR With Full-Body Avatars
JP7095006B2 (en) Game programs, character control programs, methods, and information processing equipment
US20240367060A1 (en)Systems and methods for enabling communication between users
TummalapallyAugment the Multi-Modal Interaction Capabilities of HoloLens Through Networked Sensory Devices

Legal Events

Date | Code | Title | Description
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | CB02 | Change of applicant information | Country or region after: China; Address after: Room 208-2, Building 1, 1818-1 Wenyi West Road, Yuhang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311100; Applicant after: Xiaopai Technology (Hangzhou) Co.,Ltd.; Address before: Room 615, Block A, Building 1, No. 3000 Longdong Avenue, Pudong New Area, Shanghai; Applicant before: PIMAX TECHNOLOGY (SHANGHAI) Co.,Ltd.; Country or region before: China
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20221213
