CN106303555B - A live broadcast method, device and system based on mixed reality - Google Patents

A live broadcast method, device and system based on mixed reality

Info

Publication number
CN106303555B
Authority
CN
China
Prior art keywords
data
image
video data
dimensional
dimensional scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610639734.3A
Other languages
Chinese (zh)
Other versions
CN106303555A (en)
Inventor
周苑龙
秦凯
熊飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Modern Century Technology Co ltd
Original Assignee
Shenzhen Morden Century Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Morden Century Science And Technology Co Ltd
Priority to CN201610639734.3A
Publication of CN106303555A
Application granted
Publication of CN106303555B
Expired - Fee Related
Anticipated expiration


Abstract

Translated from Chinese

The present invention provides a live broadcast method based on mixed reality, the method comprising: acquiring video data and audio data collected by an on-site data acquisition terminal; generating, according to the video data, a three-dimensional scene image matching the video data; and playing the collected video data in the three-dimensional scene image, and playing the scene-adapted audio data according to the user's position in the three-dimensional scene. The invention enables users to obtain richer on-site data, thereby creating a better live broadcast atmosphere.

Description

Translated from Chinese
A live broadcast method, device and system based on mixed reality

Technical field

The present invention belongs to the field of the Internet, and in particular relates to a live broadcast method, device and system based on mixed reality.

Background art

With the continuous development of network communication technology, data transmission methods have become increasingly diverse. For example, a smart terminal can transmit data at high speed over a mobile communication network (3G, 4G, etc.), a WiFi network or a wired network. Alongside the growth in transmission speed, live video content is now broadcast not only on television but also over the Internet, and users watching live content can send interactive content in real time, enhancing the interactivity of the live broadcast.

The current live broadcast method generally collects video data at the scene through a camera and audio data through a microphone. The audio data and video data are encoded and transmitted to a user terminal, which decodes and plays the encoded data, so that the user can play the live data through a network-connected terminal.

Since the existing live broadcast method plays directly on the smart terminal, it is limited to the playback of audio and video content, which is not conducive to the user obtaining richer on-site data or to creating a better live broadcast atmosphere.

Summary of the invention

The purpose of the present invention is to provide a live broadcast method, device and system based on mixed reality, so as to solve the problem that the live broadcast methods of the prior art are not conducive to users obtaining richer on-site data or to creating a better live broadcast atmosphere.

In a first aspect, an embodiment of the present invention provides a live broadcast method based on mixed reality, the method comprising:

acquiring video data and audio data collected by an on-site data acquisition terminal;

generating, according to the video data, a three-dimensional scene image matching the video data; and

playing the collected video data in the three-dimensional scene image, and playing the scene-adapted audio data according to the user's position in the three-dimensional scene.

With reference to the first aspect, in a first possible implementation manner of the first aspect, the method further includes:

receiving a user's scene entry request and, according to the request, generating a corresponding avatar in the three-dimensional scene; and

collecting the user's behavior state data and, according to the collected behavior state data, controlling the avatar in the three-dimensional scene to perform corresponding actions.

With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the method further includes:

displaying, in the three-dimensional scene, the avatars and actions of other users at their selected positions, the actions corresponding to the behavior state data of those users.

With reference to the first aspect, in a third possible implementation manner of the first aspect, the step of playing the collected video data in the three-dimensional scene image includes:

detecting the image area in which the person in the video data is located; and

cropping the image according to the image area in which the person is located, and playing the cropped image area in the three-dimensional scene image.

In a second aspect, an embodiment of the present invention provides a live broadcast device based on mixed reality, the device comprising:

a data acquisition unit, configured to acquire video data and audio data collected by an on-site data acquisition terminal;

a three-dimensional scene image generation unit, configured to generate a three-dimensional scene image matching the video data according to the video data; and

a data playing unit, configured to play the collected video data in the three-dimensional scene image, and play the scene-adapted audio data according to the user's position in the three-dimensional scene.

With reference to the second aspect, in a first possible implementation manner of the second aspect, the device further includes:

an avatar generation unit, configured to receive a user's scene entry request and, according to the request, generate a corresponding avatar in the three-dimensional scene; and

a first action control display unit, configured to collect the user's behavior state data and, according to the collected behavior state data, control the avatar in the three-dimensional scene to perform corresponding actions.

With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the device further includes:

a second action control display unit, configured to display, in the three-dimensional scene, the avatars and actions of other users at their selected positions, the actions corresponding to the behavior state data of those users.

With reference to the second aspect, in a third possible implementation manner of the second aspect, the data playing unit includes:

an image detection subunit, configured to detect the image area in which the person in the video data is located; and

an image cropping subunit, configured to crop the image according to the image area in which the person is located, and play the cropped image area in the three-dimensional scene image.

In a third aspect, an embodiment of the present invention provides a live broadcast system based on mixed reality, the system comprising a behavior data collection module, a processor and a display module, wherein:

the behavior data collection module is configured to collect the user's behavior state data and send the collected behavior state data to the processor;

the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by the on-site data acquisition terminal, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate the user's avatar in the three-dimensional scene image, and control the motion state of the avatar according to the collected behavior state data; and

the display module is configured to display the three-dimensional scene image.

With reference to the third aspect, in a first possible implementation manner of the third aspect, the behavior data collection module and the display module are head-mounted virtual reality helmets.

In the present invention, after the video data and audio data collected by the on-site data acquisition terminal are acquired, a corresponding three-dimensional scene image is generated according to the video data, the collected video data is played in the three-dimensional scene image, and the playback of the audio data is controlled according to the user's viewing position in the three-dimensional scene, so that the user can obtain richer on-site data and a better live broadcast atmosphere can be created.

Brief description of the drawings

Fig. 1 is a flowchart of the implementation of the live broadcast method based on mixed reality provided by the first embodiment of the present invention;

Fig. 2 is a flowchart of the implementation of the live broadcast method based on mixed reality provided by the second embodiment of the present invention;

Fig. 3 is a flowchart of the implementation of the live broadcast method based on mixed reality provided by the third embodiment of the present invention;

Fig. 4 is a schematic structural diagram of the live broadcast device based on mixed reality provided by the fourth embodiment of the present invention.

Detailed description of the embodiments

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

The purpose of the embodiments of the present invention is to provide a mixed reality-based live broadcast method with a better live atmosphere, so as to solve the problem that prior-art live broadcasts usually play the sound and video data directly, a playback manner that cannot effectively restore the on-site audio and video data, so that users cannot experience the live atmosphere of the broadcast. The present invention is further described below in conjunction with the accompanying drawings.

Embodiment 1:

Fig. 1 shows the implementation process of the live broadcast method based on mixed reality provided by the first embodiment of the present invention, detailed as follows:

In step S101, the video data and audio data collected by the on-site data acquisition terminal are acquired.

Specifically, the on-site data acquisition terminal in the embodiment of the present invention may be a professional camera used by the live broadcast equipment of a stadium, a concert, a TV program or the like, together with a microphone used for on-site commentary. The video data of the camera and the audio data of the microphone are encoded and compressed, and then sent over the network to a live broadcast server; other user terminals can access the server by request to obtain the live video data and audio data.

Of course, the video data and audio data may also be on-site data collected through a camera and microphone connected to a computer, or collected by a device such as a smartphone.

In the embodiment of the present invention, the acquired video data is two-dimensional image data, and the video data generally includes a portrait of the anchor. Live broadcasts can be classified according to their content, for example into knowledge explanation broadcasts, singing broadcasts, sports event broadcasts, and other TV program commentary broadcasts.

In step S102, a three-dimensional scene image matching the video data is generated according to the video data.

In the embodiment of the present invention, the three-dimensional scene image may be one of a plurality of three-dimensional scene images stored in advance by the user. After the video data collected by the on-site data acquisition terminal has been acquired, matching can be performed according to the video data. The matching method may include calculating the similarity between the environment image of the video data and each three-dimensional scene image; when the similarity exceeds a certain threshold, the video data is considered to match the corresponding three-dimensional scene data.
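The patent leaves the similarity computation unspecified. A minimal sketch of one plausible realization compares grayscale intensity histograms of a video frame against a library of pre-stored scene images; the cosine measure, the 0.9 threshold, and all names here are illustrative assumptions, not part of the patent:

```python
import numpy as np

def histogram_similarity(frame, scene, bins=32):
    """Cosine similarity between the grayscale intensity histograms
    of a video frame and a candidate scene image."""
    h1, _ = np.histogram(frame, bins=bins, range=(0, 256))
    h2, _ = np.histogram(scene, bins=bins, range=(0, 256))
    n1, n2 = np.linalg.norm(h1), np.linalg.norm(h2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return float(np.dot(h1 / n1, h2 / n2))

def match_scene(frame, scene_library, threshold=0.9):
    """Return the name of the best-matching pre-stored scene image,
    or None if no similarity reaches the threshold."""
    best_name, best_score = None, threshold
    for name, scene in scene_library.items():
        score = histogram_similarity(frame, scene)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

A production system would use a more robust descriptor (e.g. learned embeddings), but the threshold-based accept/reject structure matches the text above.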

Alternatively, corresponding audio features may be preset for the three-dimensional scene images. When the similarity between the audio features of a three-dimensional scene image and the collected audio data is greater than a certain value, the collected data is matched to that three-dimensional scene image.

Of course, the three-dimensional scene data may also be generated automatically from the collected video data. The generation method may automatically produce a corresponding three-dimensional scene image from the images in the acquired video data in combination with a three-dimensional image generation tool. Alternatively, the corresponding three-dimensional scene image may be looked up according to a broadcast type defined by the user.

For example, for a singing-type live broadcast, the three-dimensional scene image of a concert can be generated automatically, and the acquired video data played on the large screen and the main stage of the concert scene. For a knowledge explanation broadcast, a classroom scene can be generated, and the video data played at the position of the podium.
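The type-to-scene lookup described above can be sketched as a simple table; the template names and video anchors below are hypothetical examples, not values from the patent:

```python
# Hypothetical lookup table from a user-defined broadcast type to a
# pre-built 3D scene template and the anchor where the 2D feed is played.
SCENE_TEMPLATES = {
    "singing": {"scene": "concert_hall", "video_anchor": "main_stage_screen"},
    "knowledge": {"scene": "classroom", "video_anchor": "podium"},
    "sports": {"scene": "stadium", "video_anchor": "big_screen"},
}

def scene_for_broadcast(broadcast_type, default="concert_hall"):
    """Look up the scene template for a broadcast type, falling back
    to a default scene when the type is unknown."""
    entry = SCENE_TEMPLATES.get(broadcast_type)
    return entry["scene"] if entry else default
```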

In step S103, the collected video data is played in the three-dimensional scene image, and the scene-adapted audio data is played according to the user's position in the three-dimensional scene.

In the generated three-dimensional scene image, the video data can be played at a preset position. The user can watch the video data in the generated three-dimensional scene image, obtaining richer on-site data and an improved live atmosphere during playback.

Moreover, the present invention also uses the user's viewing position in the three-dimensional scene image to obtain audio data corresponding to that position. The user's viewing position in the three-dimensional scene image may be assigned according to the user's request, and the scene-adapted audio data is the sound corresponding to that position. For example, when the user's viewing position is at position A, the relationship between the user's position in the three-dimensional scene data and the position of the sound source in the three-dimensional scene image is calculated, and the playing times of the left and right channels are controlled accordingly, simulating the sound effect of the scene and thereby further enhancing the on-site atmosphere.
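A minimal sketch of how per-channel playing times could be derived from the geometry, modelling the two ears as points offset from the listener's position in the scene plane; the 2D layout, the head radius, and the function names are assumptions for illustration, not the patent's method:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second
HEAD_RADIUS = 0.09      # assumed ear offset from the head centre, in metres

def channel_delays(listener_pos, source_pos):
    """Arrival delay (seconds) of the source's sound at the left and
    right ears, with the ears modelled as two points offset along the
    x-axis from the listener's position in the scene plane."""
    lx, ly = listener_pos
    left = math.dist((lx - HEAD_RADIUS, ly), source_pos)
    right = math.dist((lx + HEAD_RADIUS, ly), source_pos)
    return left / SPEED_OF_SOUND, right / SPEED_OF_SOUND
```

Delaying each channel by its computed value reproduces the interaural time difference that makes a source sound as if it came from a particular direction at position A.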

By acquiring the video data and audio data collected by the on-site data acquisition terminal, generating a corresponding three-dimensional scene image according to the video data, playing the collected video data in the three-dimensional scene image, and controlling the playback of the audio data according to the user's viewing position in the three-dimensional scene, the present invention enables users to obtain richer on-site data, thereby creating a better live broadcast atmosphere.

Embodiment 2:

Fig. 2 shows the implementation process of the live broadcast method based on mixed reality provided by the second embodiment of the present invention, detailed as follows:

In step S201, the video data and audio data collected by the on-site data acquisition terminal are acquired.

In step S202, a three-dimensional scene image matching the video data is generated according to the video data.

In step S203, a user's scene entry request is received, and a corresponding avatar is generated in the three-dimensional scene according to the request.

In the embodiment of the present invention, after the corresponding three-dimensional scene image has been generated, the three-dimensional scene image may contain a plurality of avatar positions, which can be used to assign avatars to users connected to the live broadcast system. For example, after accessing the system, a user can view the positions currently available for allocation, select a preferred position, and activate the avatar corresponding to that position.
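The position-allocation step above can be sketched as follows; the class and method names are hypothetical, and a real system would hold this state on the live broadcast server:

```python
class AvatarSeats:
    """Hypothetical registry of the avatar positions a 3D scene offers."""

    def __init__(self, positions):
        self.free = dict(positions)   # position id -> scene coordinate
        self.taken = {}               # position id -> user id

    def available(self):
        """Position ids still open for allocation."""
        return sorted(self.free)

    def claim(self, user_id, position_id):
        """Activate the avatar at the chosen position; False if taken."""
        if position_id not in self.free:
            return False
        self.free.pop(position_id)
        self.taken[position_id] = user_id
        return True
```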

In step S204, the user's behavior state data is collected, and the avatar in the three-dimensional scene is controlled to perform corresponding actions according to the collected behavior state data.

After the user's avatar has been activated, the user's behavior state data can be collected in real time, through a virtual reality helmet or other sensing devices.

After the user's behavior state data is detected, the avatar is controlled to perform actions according to that data.

For example, when a reaching movement of the user is detected, the direction, speed and amplitude of the movement of the user's arm and other body parts can be measured through an infrared sensor mounted on the virtual helmet, or through an acceleration sensor. The corresponding parts of the avatar, such as the arm and wrist, are then adjusted according to this data to perform the corresponding actions.
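One way the sensed direction and amplitude could drive an avatar joint is sketched below; the sample format and the linear amplitude-to-angle mapping are illustrative assumptions, not the patent's specification:

```python
# Illustrative only: the sample layout and the linear mapping from
# movement amplitude to joint rotation are assumed, not from the patent.
def apply_motion_sample(avatar_joints, sample, sensitivity=1.0):
    """Rotate one avatar joint along the sensed movement direction by
    an angle proportional to the sensed amplitude."""
    joint = sample["joint"]            # e.g. "right_wrist"
    dx, dy, dz = sample["direction"]   # unit vector from the sensor
    angle = sample["amplitude"] * sensitivity
    x, y, z = avatar_joints.get(joint, (0.0, 0.0, 0.0))
    avatar_joints[joint] = (x + dx * angle, y + dy * angle, z + dz * angle)
    return avatar_joints
```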

As a further optimized implementation of the present invention, the method may further include: displaying, in the three-dimensional scene, the avatars and actions of other users at their selected positions, the actions corresponding to the behavior state data of those users.

When another user requests to enter, an avatar is created at the corresponding position in the three-dimensional scene image, and the generated avatar information is sent to the server. The server forwards the avatar information to the other user terminals watching the live broadcast, and the action state of the avatar is displayed on those terminals.

In step S205, the collected video data is played in the three-dimensional scene image, and the scene-adapted audio data is played according to the user's position in the three-dimensional scene.

On the basis of Embodiment 1, this embodiment further adds the user's own avatar to the three-dimensional scene image, and may also include the avatars of other users watching the live broadcast at the same time, so that the user enjoys a better live broadcast experience and can interact more conveniently.

Embodiment 3:

Fig. 3 shows the implementation process of the live broadcast method based on mixed reality provided by the third embodiment of the present invention, detailed as follows:

In step S301, the video data and audio data collected by the on-site data acquisition terminal are acquired.

In step S302, a three-dimensional scene image matching the video data is generated according to the video data.

In step S303, the image area in which the person in the video data is located is detected.

Specifically, the detection of the image area in which the person is located may be triggered according to predetermined conditions. For example, when the broadcast type is detected to be a singing broadcast, a knowledge explanation broadcast or the like, detection of person images in the video data begins.

In addition, a detection request from the user may be received, and image detection performed on the video data according to that request.

In step S304, the image is cropped according to the image area in which the person is located, the cropped image area is played in the three-dimensional scene image, and the scene-adapted audio data is played according to the user's position in the three-dimensional scene.

According to a preset person model, combined with the change information of the person image area between frames, the person image in the video data can be detected, the image area in which the person is located obtained, and that area cropped out.
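The patent does not fix a detection algorithm; a minimal sketch using only the inter-frame change information mentioned above to bound and crop the moving (person) region follows. The threshold and function names are assumptions, and a real pipeline would combine this with the preset person model:

```python
import numpy as np

def moving_region(prev_frame, frame, threshold=25):
    """Bounding box (top, bottom, left, right) of the pixels that changed
    between two grayscale frames; None if nothing changed."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

def crop_person(prev_frame, frame):
    """Crop the frame to the changed (person) region, keeping the full
    frame when no motion is detected."""
    box = moving_region(prev_frame, frame)
    if box is None:
        return frame
    top, bottom, left, right = box
    return frame[top:bottom, left:right]
```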

Fusing the cropped image area with the three-dimensional scene image gives the person a better blending effect, so that users watching the live broadcast can see the combination of the three-dimensional scene data and the person over a wider range.

Embodiment 4:

Fig. 4 shows a schematic structural diagram of the live broadcast device based on mixed reality provided by the fourth embodiment of the present invention, detailed as follows:

The live broadcast device based on mixed reality according to the embodiment of the present invention includes:

a data acquisition unit 401, configured to acquire video data and audio data collected by the on-site data acquisition terminal;

a three-dimensional scene image generation unit 402, configured to generate a three-dimensional scene image matching the video data according to the video data; and

a data playing unit 403, configured to play the collected video data in the three-dimensional scene image, and play the scene-adapted audio data according to the user's position in the three-dimensional scene.

Preferably, the device further includes:

an avatar generation unit, configured to receive a user's scene entry request and, according to the request, generate a corresponding avatar in the three-dimensional scene; and

a first action control display unit, configured to collect the user's behavior state data and, according to the collected behavior state data, control the avatar in the three-dimensional scene to perform corresponding actions.

Preferably, the device further includes:

a second action control display unit, configured to display, in the three-dimensional scene, the avatars and actions of other users at their selected positions, the actions corresponding to the behavior state data of those users.

Preferably, the data playing unit includes:

an image detection subunit, configured to detect the image area in which the person in the video data is located; and

an image cropping subunit, configured to crop the image according to the image area in which the person is located, and play the cropped image area in the three-dimensional scene image.

The live broadcast device based on mixed reality according to the embodiment of the present invention corresponds to the live broadcast methods based on mixed reality of Embodiments 1 to 3, and is not described again here.

In addition, an embodiment of the present invention further provides a live broadcast system based on mixed reality, the system including a behavior data collection module, a processor and a display module, wherein:

the behavior data collection module is configured to collect the user's behavior state data and send the collected behavior state data to the processor;

the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by the on-site data acquisition terminal, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate the user's avatar in the three-dimensional scene image, and control the motion state of the avatar according to the collected behavior state data; and

the display module is configured to display the three-dimensional scene image.

With reference to the third aspect, in a first possible implementation manner of the third aspect, the behavior data collection module and the display module are head-mounted virtual reality helmets. Of course, they are not limited thereto: the behavior data collection module may also include acceleration sensors placed on the hands and legs, and the display module may also be a device such as virtual reality glasses. The live broadcast system based on mixed reality corresponds to the live broadcast methods based on mixed reality of Embodiments 1 to 3.

在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided by the present invention, it should be understood that the disclosed devices and methods can be implemented in other ways. For example, the device embodiments described above are only illustrative. For example, the division of the units is only a logical function division. In actual implementation, there may be other division methods. For example, multiple units or components can be combined or May be integrated into another system, or some features may be ignored, or not implemented. In another point, the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, the part that contributes to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

CN201610639734.3A | 2016-08-05 | 2016-08-05 | A live broadcast method, device and system based on mixed reality | Expired - Fee Related | CN106303555B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610639734.3A | 2016-08-05 | 2016-08-05 | A live broadcast method, device and system based on mixed reality

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610639734.3A | 2016-08-05 | 2016-08-05 | A live broadcast method, device and system based on mixed reality

Publications (2)

Publication Number | Publication Date
CN106303555A (en) | 2017-01-04
CN106303555B (en) | 2019-12-03

Family

ID=57666044

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610639734.3A | A live broadcast method, device and system based on mixed reality (Expired - Fee Related, granted as CN106303555B) | 2016-08-05 | 2016-08-05

Country Status (1)

Country | Link
CN (1) | CN106303555B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108696740A (en) * | 2017-02-14 | 2018-10-23 | 深圳梦境视觉智能科技有限公司 | A kind of live broadcasting method and equipment based on augmented reality
CN106937154A (en) * | 2017-03-17 | 2017-07-07 | 北京蜜枝科技有限公司 | Process the method and device of virtual image
CN108933723B (en) * | 2017-05-19 | 2020-11-06 | 腾讯科技(深圳)有限公司 | Message display method and device and terminal
CN107635131B (en) * | 2017-09-01 | 2020-05-19 | 北京雷石天地电子技术有限公司 | Method and system for realizing virtual reality
CN107590817A (en) * | 2017-09-20 | 2018-01-16 | 北京奇虎科技有限公司 | Image capture device real-time data processing method and device, computing device
CN107743263B (en) * | 2017-09-20 | 2020-12-04 | 北京奇虎科技有限公司 | Video data real-time processing method and device, and computing device
CN107633228A (en) * | 2017-09-20 | 2018-01-26 | 北京奇虎科技有限公司 | Video data handling procedure and device, computing device
CN107592475A (en) * | 2017-09-20 | 2018-01-16 | 北京奇虎科技有限公司 | Video data handling procedure and device, computing device
CN107613360A (en) * | 2017-09-20 | 2018-01-19 | 北京奇虎科技有限公司 | Video data real-time processing method and device, computing equipment
CN107705316A (en) * | 2017-09-20 | 2018-02-16 | 北京奇虎科技有限公司 | Image capture device real-time data processing method and device, computing device
CN107680170A (en) * | 2017-10-12 | 2018-02-09 | 北京奇虎科技有限公司 | View synthesis method and device based on virtual world, computing device
CN107613161A (en) * | 2017-10-12 | 2018-01-19 | 北京奇虎科技有限公司 | Video data processing method, device, and computing device based on virtual world
CN107680105B (en) * | 2017-10-12 | 2021-05-25 | 北京奇虎科技有限公司 | Real-time processing method, device and computing device for video data based on virtual world
CN109874021B (en) * | 2017-12-04 | 2021-05-11 | 腾讯科技(深圳)有限公司 | Live broadcast interaction method, device and system
CN108014490A (en) * | 2017-12-29 | 2018-05-11 | 安徽创视纪科技有限公司 | A kind of outdoor scene secret room based on MR mixed reality technologies
CN108492363B (en) * | 2018-03-26 | 2020-03-10 | Oppo广东移动通信有限公司 | Augmented reality-based combination method and device, storage medium and electronic equipment
CN109120990B (en) * | 2018-08-06 | 2021-10-15 | 百度在线网络技术(北京)有限公司 | Live broadcast method, device and storage medium
SG11202106372YA (en) * | 2018-12-28 | 2021-07-29 | Dimension Nxg Private Ltd | A system and a method for generating a head mounted device based artificial intelligence (AI) bot
CN111698522A (en) * | 2019-03-12 | 2020-09-22 | 北京竞技时代科技有限公司 | Live system based on mixed reality
KR102625902B1 (en) * | 2019-03-13 | 2024-01-17 | 발루스 가부시키가이샤 | Live distribution system and live distribution method
CN110087121B (en) * | 2019-04-30 | 2021-08-06 | 广州虎牙信息科技有限公司 | Avatar display method, avatar display apparatus, electronic device, and storage medium
CN110602517B (en) * | 2019-09-17 | 2021-05-11 | 腾讯科技(深圳)有限公司 | Live broadcast method, device and system based on virtual environment
CN111314773A (en) * | 2020-01-22 | 2020-06-19 | 广州虎牙科技有限公司 | Screen recording method and device, electronic equipment and computer readable storage medium
CN111242704B (en) * | 2020-04-26 | 2020-12-08 | 北京外号信息技术有限公司 | Method and electronic equipment for superposing live character images in real scene
CN111583415B (en) * | 2020-05-08 | 2023-11-24 | 维沃移动通信有限公司 | Information processing method, device and electronic equipment
CN114466202B (en) * | 2020-11-06 | 2023-12-12 | 中移物联网有限公司 | Mixed reality live broadcast method, apparatus, electronic device and readable storage medium
CN113038262B (en) * | 2021-01-08 | 2025-01-14 | 深圳市智胜科技信息有限公司 | A panoramic live broadcast method and device
CN114173142A (en) * | 2021-11-19 | 2022-03-11 | 广州繁星互娱信息科技有限公司 | Object live broadcast display method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101032186A (en) * | 2004-09-03 | 2007-09-05 | P·津筥 | Method and apparatus for producing a phantom three-dimensional sound space with recorded sound
CN103460256A (en) * | 2011-03-29 | 2013-12-18 | 高通股份有限公司 | Anchoring virtual images to real world surfaces in augmented reality systems
CN205071236U (en) * | 2015-11-02 | 2016-03-02 | 徐文波 | Wear-type sound video processing equipment
CN105653020A (en) * | 2015-07-14 | 2016-06-08 | 朱金彪 | Time traveling method and apparatus and glasses or helmet using same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101055494B (en) * | 2006-04-13 | 2011-03-16 | 上海虚拟谷数码科技有限公司 | Dummy scene roaming method and system based on spatial index cube panoramic video
CN102737399A (en) * | 2012-06-20 | 2012-10-17 | 北京水晶石数字科技股份有限公司 | Method for roaming ancient painting
CN104869524B (en) * | 2014-02-26 | 2018-02-16 | 腾讯科技(深圳)有限公司 | Sound processing method and device in three-dimensional virtual scene


Also Published As

Publication number | Publication date
CN106303555A (en) | 2017-01-04

Similar Documents

Publication | Title
CN106303555B (en) | A live broadcast method, device and system based on mixed reality
US8990842B2 (en) | Presenting content and augmenting a broadcast
JP6724110B2 (en) | Avatar display system in virtual space, avatar display method in virtual space, computer program
CN106648083B (en) | Enhanced playing scene synthesis control method and device
CN105491353B (en) | Remote monitoring method and device
US11587292B2 (en) | Triggered virtual reality and augmented reality events in video streams
US20190073830A1 (en) | Program for providing virtual space by head mount display, method and information processing apparatus for executing the program
JP6688378B1 (en) | Content distribution system, distribution device, reception device, and program
CN111246232A (en) | Live broadcast interaction method and device, electronic equipment and storage medium
CN102595212A (en) | Simulated group interaction with multimedia content
JP2018515972A (en) | Control of personal space content presented by a head-mounted display
WO2012039871A2 (en) | Automatic customized advertisement generation system
CN105915849A (en) | Virtual reality sports event play method and system
JP2017005709A (en) | Broadcast haptics architecture
CN108322474B (en) | Virtual reality system based on shared desktop, related device and method
US20240259627A1 (en) | Same-screen interaction control method and apparatus, and electronic device and non-transitory storage medium
KR102200239B1 (en) | Real-time computer graphics video broadcasting service system
JP6609078B1 (en) | Content distribution system, content distribution method, and content distribution program
CN116233513A (en) | Virtual gift special effect playing processing method, device and equipment in virtual reality live broadcasting room
CN112929685A (en) | Interaction method and device for VR live broadcast room, electronic equipment and storage medium
KR20130067855A (en) | Apparatus and method for providing virtual 3d contents animation where view selection is possible
CN110198457B (en) | Video playing method and device, system, storage medium, terminal and server thereof
JP2020109896A (en) | Video distribution system
CN117544808A (en) | Device control method, storage medium, and electronic device
JP2020108177A (en) | Content distribution system, distribution device, reception device, and program

Legal Events

Date | Code | Title | Description
 | C06 | Publication |
 | PB01 | Publication |
 | C10 | Entry into substantive examination |
 | SE01 | Entry into force of request for substantive examination |
2018-06-13 | TA01 | Transfer of patent application right | Address after: 518000 Guangdong Shenzhen Nanshan District Nantou Street Nanshan Road 1088 South Garden maple leaf building 10L; Applicant after: Shenzhen Modern Century Technology Co.,Ltd. Address before: 518000, 7 floor, Fuli building, 1 KFA Road, Nanshan street, Nanshan District, Shenzhen, Guangdong; Applicant before: SHENZHEN BEANVR TECHNOLOGY CO.,LTD.
 | GR01 | Patent grant |
 | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2019-12-03

