This application is a divisional application of the Chinese invention patent application with application number 201280032058.3, filed June 27, 2012, and entitled "System and Method for Adaptive Audio Signal Generation, Encoding and Presentation".
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to US Provisional Application No. 61/504,005, filed July 1, 2011, and US Provisional Application No. 61/636,429, filed April 20, 2012, both of which are hereby incorporated by reference in their entirety for all purposes.
TECHNICAL FIELD
One or more implementations relate generally to audio signal processing, and more specifically to hybrid object- and channel-based audio processing for use in cinema, home, and other environments.
BACKGROUND
The subject matter discussed in the Background section should not be assumed to be prior art merely because it is mentioned in the Background section. Similarly, problems mentioned in the Background section, or associated with the subject matter of the Background section, should not be assumed to have been previously recognized in the prior art. The subject matter in the Background section merely represents different approaches, which in and of themselves may also be inventions.
Since the introduction of sound to film, there has been a steady development of technology for capturing the creator's artistic intent for the motion picture soundtrack and for accurately reproducing it in a cinema environment. A fundamental role of cinema sound is to support the story being shown on screen. A typical cinema soundtrack comprises many different sound elements corresponding to the images and elements on screen: dialogue, noises, and sound effects emanating from different on-screen elements, combined with background music and ambient effects to create the overall audience experience. The artistic intent of the creators and producers represents their desire to have these sounds reproduced in a way that corresponds as closely as possible to what is shown on screen with respect to sound source location, intensity, movement, and other similar parameters.
Current cinema authoring, distribution, and playback suffer from limitations that constrain the creation of truly immersive and lifelike audio. Traditional channel-based audio systems, such as stereo and 5.1 systems, send audio content in the form of speaker feeds to individual speakers in the playback environment. The introduction of digital cinema has created new standards for sound on film, such as the incorporation of up to 16 channels of audio to allow greater creativity for content creators and a more enveloping and realistic auditory experience for audiences. The introduction of 7.1 surround systems has provided a new format that increases the number of surround channels by splitting the existing left and right surround channels into four zones, thereby increasing the scope for sound designers and mixers to control the positioning of audio elements in the theatre.
To further improve the listener experience, playback of sound in virtual three-dimensional environments has become an area of increased research and development. The spatial presentation of sound utilizes audio objects, which are audio signals with associated parametric source descriptions of apparent source position (e.g., 3D coordinates), apparent source width, and other parameters. Object-based audio is increasingly being used for many current multimedia applications, such as digital cinema, video games, simulators, and 3D video.
Expanding beyond traditional speaker feeds and channel-based audio as a means for distributing spatial audio is critical, and there has been considerable interest in a model-based audio description that holds the promise of allowing listeners/exhibitors the freedom to select a playback configuration that suits their individual needs or budget, with the audio rendered specifically for their chosen configuration. At a high level, there are currently four main spatial audio description formats: speaker feed, in which audio is described as signals intended for loudspeakers at nominal speaker positions; microphone feed, in which audio is described as signals captured by virtual or actual microphones in a predefined array; model-based description, in which audio is described in terms of a sequence of audio events at described positions; and binaural, in which audio is described by the signals that arrive at the listener's ears. These four description formats are often associated with one or more rendering technologies that convert the audio signals to speaker feeds. Current rendering technologies include: panning, in which the audio stream is converted to speaker feeds using a set of panning laws and known or assumed speaker positions (typically rendered prior to distribution); Ambisonics, in which microphone signals are converted to feeds for a scalable array of loudspeakers (typically rendered after distribution); WFS (wave field synthesis), in which sound events are converted to the appropriate speaker signals to synthesize the sound field (typically rendered after distribution); and binaural, in which an L/R (left/right) two-channel signal is delivered to the left and right ears, typically over headphones, but also over loudspeakers in combination with crosstalk cancellation (rendered before or after distribution). Of these formats, the speaker feed format is the most common because it is simple and effective. The best sonic results (most accurate, most reliable) are achieved by mixing/monitoring and distributing to the speaker feeds directly, since there is no processing between the content creator and the listener. If the playback system is known in advance, a speaker feed description generally provides the highest fidelity. However, in many practical applications the playback system is unknown. The model-based description is considered the most adaptable because it makes no assumptions about the rendering technology and is therefore most easily applied to any rendering technology. Although the model-based description efficiently captures spatial information, it becomes very inefficient as the number of audio sources increases.
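As a concrete illustration of the panning technique mentioned above, the following sketch converts a mono stream into left/right speaker feeds. The function name, the two-speaker layout, and the sin/cos constant-power pan law are illustrative assumptions, not part of the original disclosure:

```python
import math

def pan_constant_power(samples, position):
    """Pan a mono stream to L/R speaker feeds.

    position: -1.0 (full left) .. +1.0 (full right).
    A constant-power (sin/cos) pan law keeps the summed power of the
    two feeds constant, so perceived loudness stays roughly uniform
    as the source moves between the speakers.
    """
    theta = (position + 1.0) * math.pi / 4.0  # map -1..+1 onto 0..pi/2
    gain_l, gain_r = math.cos(theta), math.sin(theta)
    left = [s * gain_l for s in samples]
    right = [s * gain_r for s in samples]
    return left, right

# A source panned to center feeds both speakers equally (about -3 dB each).
left, right = pan_constant_power([1.0, 0.5], position=0.0)
```

Real cinema renderers pan across many speakers using the known or assumed speaker positions the text describes; this two-speaker case only shows the underlying gain law.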
For many years, cinema systems have featured discrete screen channels in the form of left, center, right, and occasionally 'inner left' and 'inner right' channels. These discrete sources generally have sufficient frequency response and power handling to allow sounds to be placed accurately in different areas of the screen, and to permit timbre matching as sounds are moved or panned between locations. Recent developments in improving the listener experience attempt to accurately reproduce the location of sounds relative to the listener. In a 5.1 setup, the surround 'zones' consist of an array of loudspeakers, all of which carry the same audio information within each left-surround or right-surround zone. Such arrays can be effective for 'ambient' or diffuse surround effects; however, in everyday life many sound effects originate from randomly placed point sources. For example, in a restaurant, ambient music may apparently be played from all around, while subtle but discrete sounds originate from specific points: a person chatting from one point, the clatter of a knife on a plate from another. Being able to place such sounds discretely around the auditorium can add a heightened sense of realism without being obtrusively noticeable. Overhead sounds are also an important component of the surround definition. In the real world, sounds originate from all directions, and not always from a single horizontal plane. An added sense of realism can be achieved if sound can be heard from overhead, in other words from the 'upper hemisphere'. Current systems, however, do not provide truly accurate reproduction of sound for different audio types in a variety of different playback environments. Existing systems require a great deal of processing, knowledge, and configuration of the actual playback environment to attempt an accurate representation of location-specific sounds, rendering current systems impractical for most applications.
What is needed is a system that supports multiple screen channels, resulting in increased definition and improved audiovisual coherence for on-screen sounds or dialogue, and the ability to precisely position sources anywhere in the surround zones to improve the audiovisual transition from screen to room. For example, if a character on screen looks toward a sound source inside the room, the sound engineer ("mixer") should have the ability to precisely position the sound so that it matches the character's line of sight, and the effect will be consistent throughout the audience. In a traditional 5.1 or 7.1 surround sound mix, however, the effect is highly dependent on the listener's seating position, which is disadvantageous for most large-scale listening environments. Increased surround resolution creates new opportunities to use sound in a room-centric way, in contrast to the traditional approach, in which content is created assuming a single listener at the "sweet spot".
Aside from spatial issues, current multichannel state-of-the-art systems suffer with regard to timbre. For example, the timbral quality of some sounds, such as steam hissing out of a broken pipe, can suffer when reproduced by an array of speakers. The ability to direct specific sounds to a single speaker gives the mixer the opportunity to eliminate the artifacts of array reproduction and deliver a more realistic experience to the audience. Traditionally, surround speakers do not support the same full range of audio frequencies and levels that the large screen channels support. Historically, this has caused problems for mixers, reducing their ability to freely move full-range sounds from the screen to the room. As a result, theatre owners have not felt compelled to upgrade their surround channel configurations, preventing the widespread adoption of higher-quality equipment.
SUMMARY OF THE INVENTION
Systems and methods are described for a cinema sound format and processing system that includes a new speaker layout (channel configuration) and an associated spatial description format. An adaptive audio system and format is defined that supports multiple rendering technologies. Audio streams are transmitted along with metadata that describes the "mixer's intent", including the desired position of each audio stream. The position can be expressed as a named channel (from within the predefined channel configuration) or as three-dimensional position information. This channel-plus-object format combines the best of the channel-based and model-based audio scene description methods. Audio data for the adaptive audio system comprises a number of independent monophonic audio streams. Each stream has associated metadata that specifies whether the stream is a channel-based or an object-based stream. Channel-based streams have rendering information encoded by means of a channel name; object-based streams have location information encoded through mathematical expressions encoded in further associated metadata. The original independent audio streams are packaged as a single serial bitstream that contains all of the audio data. This configuration allows sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) so as to correspond to the mixer's intent. The object position metadata contains the appropriate allocentric frame-of-reference information required to play the sound correctly using the available speaker positions in a room that is set up to play adaptive audio content. This enables sound to be mixed optimally for a particular playback environment, which may be different from the mixing environment experienced by the sound engineer.
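The per-stream metadata described above — each mono stream tagged as either channel-based (position as a channel name) or object-based (position as 3D coordinates) — can be sketched as a simple data structure. All field names here are hypothetical; the actual format encodes this information in a serial bitstream rather than in Python objects:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AudioStream:
    """One independent monophonic stream plus its descriptive metadata.

    Exactly one of `channel_name` (channel-based stream) or `position`
    (object-based stream, allocentric x/y/z in 0..1 room coordinates)
    is expected to be set. Field names are illustrative assumptions.
    """
    samples: list
    channel_name: Optional[str] = None                    # e.g. "Left Front"
    position: Optional[Tuple[float, float, float]] = None # e.g. (0.25, 0.9, 0.5)

    @property
    def is_object(self) -> bool:
        return self.position is not None

# A channel-based bed stream and an object-based stream:
bed = AudioStream(samples=[], channel_name="Left Surround")
obj = AudioStream(samples=[], position=(0.25, 0.9, 0.5))
```

The renderer can branch on `is_object` to decide whether to feed a named speaker zone or to pan the stream to the room position carried in the metadata.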
The adaptive audio system improves audio quality in different rooms through such benefits as improved room equalization and surround bass management, so that the speakers (whether on-screen or off-screen) can be addressed freely by the mixer without having to consider timbre matching. The adaptive audio system adds the flexibility and power of dynamic audio objects into traditional channel-based workflows. These audio objects allow creators to control discrete sound elements independently of any specific playback speaker configuration, including overhead speakers. The system also introduces new efficiencies into the post-production process, allowing sound engineers to capture all of their intent efficiently and then monitor in real time, or automatically generate, surround sound 7.1 and 5.1 versions.
The adaptive audio system simplifies distribution by encapsulating the audio essence and artistic intent in a single track file within the digital cinema processor, which can be faithfully played back in a broad range of theatre configurations. The system provides optimal reproduction of artistic intent when the mix and render use the same channel configuration and a single inventory that adapts downward to the rendering configuration (i.e., downmixing).
These and other advantages are provided through embodiments directed to a cinema sound platform that addresses current system limitations and delivers an audio experience beyond currently available systems.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following drawings, like reference numerals are used to refer to like elements. Although the following figures depict various examples, the one or more implementations are not limited to the examples depicted in the figures.
Figure 1 is a top-level overview of an audio creation and playback environment utilizing an adaptive audio system, according to one embodiment.
Figure 2 illustrates the combination of channel-based and object-based data to produce an adaptive audio mix, according to one embodiment.
Figure 3 is a block diagram illustrating the workflow of creating, packaging, and rendering adaptive audio content, according to one embodiment.
Figure 4 is a block diagram of the rendering stage of an adaptive audio system, according to one embodiment.
Figure 5 is a table listing the metadata types and associated metadata elements for the adaptive audio system, according to one embodiment.
Figure 6 is a diagram illustrating post-production and mastering for an adaptive audio system, according to one embodiment.
Figure 7 is a diagram of an example workflow for a digital cinema packaging process using adaptive audio files, according to one embodiment.
Figure 8 is a top view of an example layout of suggested speaker locations for use with an adaptive audio system in a typical auditorium.
Figure 9 is a front view of an example placement of suggested speaker locations at the screen for a typical auditorium.
Figure 10 is a side view of an example layout of suggested speaker locations for use with an adaptive audio system in a typical auditorium.
Figure 11 is an example of the placement of top surround speakers and side surround speakers relative to a reference point, according to one embodiment.
DETAILED DESCRIPTION
Systems and methods are described for an adaptive audio system and an associated audio signal and data format that supports multiple rendering technologies. Aspects of the one or more embodiments described herein may be implemented in an audio or audio-visual system that processes source audio information in a mixing, rendering, and playback system that includes one or more computers or processing devices executing software instructions. Any of the described embodiments may be used alone or together with one another in any combination. Although various embodiments may have been motivated by various deficiencies of the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
For purposes of the present description, the following terms have the associated meanings:
Channel or audio channel: a monophonic audio signal, or an audio stream plus metadata in which the position is coded as a channel ID, e.g., "Left Front" or "Right Top Surround". A channel object may drive multiple speakers; e.g., the "Left Surround" channel (Ls) will feed all of the speakers in the Ls array.
Channel configuration: a predefined set of speaker zones with associated nominal positions, e.g., 5.1, 7.1, and so on; 5.1 refers to a six-channel surround sound system having front left and right channels, a center channel, two surround channels, and a subwoofer channel; 7.1 refers to an eight-channel surround system that adds two additional surround channels to the 5.1 system. Examples of 5.1 and 7.1 configurations include surround systems.
Speaker: an audio transducer or set of transducers that renders an audio signal.
Speaker zone: an array of one or more speakers that can be uniquely referenced and that receives a single audio signal, e.g., "Left Surround" as typically found in cinema, and in particular for exclusion from or inclusion in object rendering.
Speaker channel or speaker-feed channel: an audio channel that is associated with a named speaker or speaker zone within a defined speaker configuration. A speaker channel is nominally rendered using the associated speaker zone.
Speaker channel group: a set of one or more speaker channels corresponding to a channel configuration (e.g., a stereo track, a mono track, etc.).
Object or object channel: one or more audio channels with a parametric source description, such as apparent source position (e.g., 3D coordinates), apparent source width, etc.; an audio stream plus metadata in which the position is coded as a 3D position in space.
Audio program: the complete set of speaker channels and/or object channels and associated metadata that describes the desired spatial audio presentation.
Allocentric reference: a spatial reference in which audio objects are defined relative to features within the rendering environment, such as room walls and corners, standard speaker locations, and screen location (e.g., the front-left corner of a room).
Egocentric reference: a spatial reference in which audio objects are defined relative to the perspective of the (audience) listener, often specified as an angle relative to the listener (e.g., 30 degrees to the right of the listener).
Frame: frames are short, independently decodable segments into which a total audio program is divided. The audio frame rate and boundaries are typically aligned with the video frames.
Adaptive audio: channel-based audio signals and/or object-based audio signals plus metadata that renders the audio signals based on the playback environment.
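A minimal sketch of the channel-versus-object distinction defined above, assuming a hypothetical two-zone speaker layout and a deliberately crude nearest-speaker rule for object rendering (a real renderer would pan an object across multiple speakers):

```python
# Hypothetical routing sketch: a channel-based stream feeds every speaker
# in its named zone, while an object-based stream is rendered to whichever
# configured speakers the renderer selects. Zone names, speaker ids, and
# the nearest-speaker rule are illustrative assumptions only.

SPEAKER_ZONES = {
    "Left Surround": ["Lss1", "Lss2", "Lss3"],
    "Right Surround": ["Rss1", "Rss2", "Rss3"],
}

SPEAKER_POSITIONS = {  # allocentric x/y in 0..1 room coordinates
    "Lss1": (0.0, 0.3), "Lss2": (0.0, 0.6), "Lss3": (0.0, 0.9),
    "Rss1": (1.0, 0.3), "Rss2": (1.0, 0.6), "Rss3": (1.0, 0.9),
}

def route_channel(channel_name):
    """Channel: every speaker in the associated zone receives the feed."""
    return SPEAKER_ZONES[channel_name]

def route_object(position):
    """Object: pick the single closest speaker (crude point-source render)."""
    def sq_dist(name):
        x, y = SPEAKER_POSITIONS[name]
        return (x - position[0]) ** 2 + (y - position[1]) ** 2
    return [min(SPEAKER_POSITIONS, key=sq_dist)]
```

The contrast mirrors the glossary: a speaker channel is tied to a zone regardless of room geometry, whereas an object's speaker selection is computed from its 3D position against the actual configuration.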
The cinema sound format and processing system described in this application (also referred to as an "adaptive audio system") utilizes new spatial audio description and rendering technology to allow enhanced audience immersion, greater artistic control, and system flexibility, scalability, and ease of installation and maintenance. Embodiments of the cinema audio platform include several discrete components, including mixing tools, a packer/encoder, an unpacker/decoder, in-theatre final mix and rendering components, new speaker designs, and networked amplifiers. The system includes recommendations for a new channel configuration to be used by content creators and exhibitors. The system utilizes a model-based description that supports several features, such as: a single inventory with downward and upward adaptation to the rendering configuration, i.e., delayed rendering and enabling optimal use of available speakers; improved sound envelopment, including optimized downmixing to avoid inter-channel correlation; increased spatial resolution through steer-thru arrays (e.g., an audio object dynamically assigned to one or more speakers within a surround array); and support for alternative rendering methods.
Figure 1 is a top-level overview of an audio creation and playback environment utilizing an adaptive audio system, according to one embodiment. As shown in Figure 1, a comprehensive, end-to-end system 100 includes content creation, packaging, distribution, and playback/rendering components across a large number of endpoint devices and use cases. The overall system 100 begins with content captured from and for a number of different use cases that comprise different user experiences 112. The content capture element 102 includes, for example, cinema, TV, live broadcast, user-generated content, recorded content, games, music, and the like, and may include audio/visual or pure audio content. As the content progresses through the system 100 from the capture stage 102 to the final user experience 112, it traverses several key processing steps through discrete system components. These processing steps include pre-processing of the audio 104, authoring tools and processes 106, and encoding by an audio codec 108 that captures, for example, audio data, additional metadata and reproduction information, and object channels. Various processing effects, such as compression (lossy or lossless), encryption, and the like, may be applied to the object channels for efficient and secure distribution over a variety of media. The appropriate endpoint-specific decoding and rendering processes 110 are then applied to reproduce and convey the particular adaptive audio user experience 112. The user experience 112 represents the playback of the audio or audio/visual content through the appropriate speakers and playback devices, and may represent any environment in which a listener is experiencing playback of the captured content, such as a cinema, concert hall, amphitheater, home or room, listening booth, car, game console, headphone or headset system, public address (PA) system, or any other playback environment.
Embodiments of the system 100 include an audio codec 108 that is capable of efficient distribution and storage of multichannel audio programs, and may therefore be referred to as a 'hybrid' codec. The codec 108 combines traditional channel-based audio data with associated metadata to create audio objects that facilitate the creation and delivery of audio that is adapted and optimized for rendering and playback in environments that may be different from the mixing environment. This allows the sound engineer to encode his or her intent with respect to how the final audio should be heard by the listener, based on the actual listening environment of the listener.
Conventional channel-based audio codecs operate under the assumption that the audio program will be reproduced by an array of speakers in predetermined positions relative to the listener. To create a complete multichannel audio program, a sound engineer typically mixes a large number of separate audio streams (e.g., dialogue, music, effects) to create the overall desired impression. Audio mixing decisions are typically made by listening to the audio program as reproduced by an array of speakers in the predetermined positions (e.g., a particular 5.1 or 7.1 system in a specific theatre). The final, mixed signal serves as input to the audio codec. For reproduction, a spatially accurate sound field is achieved only when the speakers are placed in the predetermined positions.
A new form of audio coding, called audio object coding, provides distinct sound sources (audio objects) as input to the encoder in the form of separate audio streams. Examples of audio objects include dialogue tracks, single instruments, individual sound effects, and other point sources. Each audio object is associated with spatial parameters, which may include, but are not limited to, sound position, sound width, and velocity information. The audio objects and associated parameters are then coded for distribution and storage. Final audio object mixing and rendering is performed at the receiving end of the audio distribution chain, as part of audio program playback. This step may be based on knowledge of the actual speaker positions, so that the result is an audio distribution system that is customizable to the user's specific listening conditions. The two coding forms (channel-based and object-based) perform optimally for different input signal conditions. Channel-based audio coders are generally more efficient for coding input signals containing dense mixtures of different audio sources, as well as for diffuse sounds. Conversely, audio object coders are more efficient for coding a small number of highly directional sound sources.
在一个实施例中,系统100的组件和方法包括音频编码、分发和 解码系统,其被配置为产生包含传统的基于声道的音频元素和音频对 象编码元素两者的一个或更多个比特流。与分别采取的基于声道的方 法或者基于对象的方法相比,这种结合的方法提供更大的编码效率和 呈现灵活性。In one embodiment, the components and methods of system 100 include an audio encoding, distribution, and decoding system configured to generate one or more bitstreams containing both traditional channel-based audio elements and audio object-coded elements. This combined approach provides greater coding efficiency and presentation flexibility than either the channel-based or object-based approaches taken separately.
描述的实施例的其它方面包括以向后可兼容的方式扩展预定义 的基于声道的音频编解码器以便包括音频对象编码元素。包含音频对 象编码元素的新的'扩展层'被定义和添加到基于声道的音频编解码器 比特流的'基本(base)'或者'向后可兼容的'层。这个方法启用一个或 更多个比特流,其包括要由遗留(legacy)解码器处理的扩展层,而 同时利用新的解码器为用户提供增强的收听者体验。增强的用户体验 的一个示例包括音频对象呈现的控制。这个方法的额外的优点是音频 对象可以在不解码/混合/重新编码用基于声道的音频编解码器编码的 多声道的音频的情况下在沿着分发链的任何地方被添加或者修改。Other aspects of the described embodiments include extending a predefined channel-based audio codec in a backward-compatible manner to include audio object coding elements. A new 'extension layer' containing the audio object coding elements is defined and added to the 'base' or 'backward-compatible' layer of the channel-based audio codec bitstream. This approach enables one or more bitstreams that include the extension layer to be processed by legacy decoders while simultaneously utilizing new decoders to provide users with an enhanced listener experience. An example of an enhanced user experience includes control over audio object presentation. An additional advantage of this approach is that audio objects can be added or modified anywhere along the distribution chain without decoding/mixing/re-encoding multi-channel audio encoded with a channel-based audio codec.
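The base/extension layering described above can be sketched as follows; this is a toy container model (the frame tagging and function names are invented for illustration, not taken from any actual codec syntax):

```python
# Hypothetical container: a stream is a list of (layer, payload) frames.
BASE, EXTENSION = "base", "extension"

stream = [
    (BASE, "5.1 channel frame 0"),
    (EXTENSION, "object frame 0"),
    (BASE, "5.1 channel frame 1"),
    (EXTENSION, "object frame 1"),
]

def legacy_decode(frames):
    # A legacy decoder understands only the base layer and skips
    # the extension frames it does not recognize.
    return [payload for layer, payload in frames if layer == BASE]

def enhanced_decode(frames):
    # A new decoder consumes both layers: the backward-compatible
    # channel bed plus the audio object coding elements.
    base = [p for layer, p in frames if layer == BASE]
    objects = [p for layer, p in frames if layer == EXTENSION]
    return base, objects
```

The same stream thus yields a complete channel-based program on legacy equipment, while new decoders additionally recover the object layer.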
关于参考系，音频信号的空间效果在为收听者提供沉浸体验方面是关键的。打算从观看屏幕或者房间的特定区域发出的声音应该通过位于相同相对位置处的扬声器(多个扬声器)播放。因此，在基于模式的描述中的声音事件的主要的音频元数据是位置，但是也可以描述其它参数，诸如尺寸、取向、速度和声散。为了传送位置，基于模式的、3D、音频空间描述要求3D坐标系统。用于发送的坐标系(欧几里得(Euclidean)、球面等)通常为了方便或者简洁起见被选择，然而，其它坐标系可以被用于呈现处理。除了坐标系之外，还要求参考系来代表对象在空间中的位置。对于用于在各种不同的环境中准确地再现基于位置的声音的系统，选择正确的参考系可以是关键因素。利用非自我中心的参考系，音频源位置相对于呈现环境内的特征(诸如房间壁和角落、标准扬声器位置和屏幕位置)被定义。在自我中心的参考系中，相对于收听者的视角来表示位置，诸如“在我前方，稍微向左”等等。空间感知(音频及其他)的科学研究已经示出了几乎到处使用自我中心的视角。然而对于电影院，出于若干原因非自我中心通常是更适合的。例如，当在屏幕上存在关联对象时音频对象的精确的位置是最重要的。使用非自我中心的参考，对于每个收听位置，并且对于任意屏幕尺寸，声音将定位在屏幕上的相同的相对位置处，例如，屏幕的中间向左三分之一处。另一个原因是混合者倾向于以非自我中心方面来思考并且混合，并且以非自我中心的框架(房间壁)来布局摇移工具，并且混合者期望它们那样被呈现，例如，这个声音应该在屏幕上，这个声音应该在屏幕外，或者来自左壁等。With respect to frames of reference, the spatial effects of audio signals are critical in providing an immersive experience for the listener. Sounds intended to emanate from a specific region of a viewing screen or room should be played through the speaker(s) located at the same relative position. Thus, the primary audio metadata of a sound event in a model-based description is position, though other parameters such as size, orientation, velocity, and acoustic dispersion can also be described. To convey position, a model-based, 3D, audio spatial description requires a 3D coordinate system. The coordinate system used for transmission (Euclidean, spherical, etc.) is generally chosen for convenience or compactness; however, other coordinate systems may be used for rendering processing. In addition to a coordinate system, a frame of reference is required to represent the positions of objects in space. For systems to accurately reproduce position-based sound in a variety of different environments, selecting the proper frame of reference can be a key factor. With an allocentric frame of reference, audio source positions are defined relative to features within the rendering environment, such as room walls and corners, standard speaker locations, and screen location. In an egocentric frame of reference, positions are represented with respect to the perspective of the listener, such as "in front of me, slightly to the left," and so on. Scientific studies of spatial perception (audio and otherwise) have shown that the egocentric perspective is used almost universally. For cinema, however, allocentric is generally more appropriate, for several reasons. For example, the precise location of an audio object is most important when there is an associated object on screen. With an allocentric reference, for every listening position, and for any screen size, the sound will localize at the same relative position on the screen, e.g., one-third of the way left of the middle of the screen. Another reason is that mixers tend to think and mix in allocentric terms, and panning tools are laid out with an allocentric frame (the room walls), and mixers expect them to be rendered that way, e.g., this sound should be on the screen, this sound should be off screen, or come from the left wall, etc.
尽管在电影院环境中使用非自我中心的参考系,但是存在其中自 我中心的参考系可以有用且更合适的一些情况。这些包括非剧情声 音,即,不存在于“故事空间”中的那些声音,例如,气氛音乐,对 于其自我中心地均匀的表现可以是期望的。另一种情况是要求自我中 心的表示的近场效果(例如,在收听者的左耳中的嗡嗡的蚊子)。目 前不存在在不使用头戴耳机(headphones)或者非常近场的扬声器的 情况下呈现这种声场的手段。另外,无限远的声源(和结果得到的平 面波)看起来来自恒定的自我中心的位置(例如,向左转30度), 并且与按照非自我中心相比,这种声音更易于按照自我中心来描述。While allocentric reference frames are used in cinema environments, there are some situations where an egocentric reference frame can be useful and more appropriate. These include non-diegetic sounds, i.e., those that do not exist in the "story space," such as mood music, for which an egocentrically uniform representation may be desirable. Another situation is near-field effects that require an egocentric representation (e.g., a mosquito buzzing in the listener's left ear). Currently, there is no means of presenting such sound fields without the use of headphones or very near-field speakers. Additionally, infinitely distant sound sources (and resulting plane waves) appear to come from a constant egocentric position (e.g., 30 degrees to the left), and such sounds are easier to describe egocentrically than allocentrically.
在一些情况中，只要标称收听位置被定义就可以使用非自我中心的参考系，但是一些示例要求还不可以呈现的自我中心的表示。虽然非自我中心的参考可以是更有用的和合适的，但是音频表示应该是可扩展的，因为许多新的特征(包括自我中心的表示)在特定应用和收听环境中可以是更期望的。自适应音频系统的实施例包括混合空间描述方法，其包括用于最佳的保真度和用于使用自我中心的参考呈现扩散或者复杂的、多点源(例如，体育场人群、环境)的推荐声道配置，加上非自我中心的、基于模式的声音描述以便有效地使得能够有增大的空间分辨率和可缩放性。In some cases, an allocentric frame of reference can be used as long as a nominal listening position is defined, but some examples require an egocentric representation that cannot yet be rendered. While an allocentric reference may be more useful and appropriate, the audio representation should be extensible, since many new features (including egocentric representations) may be more desirable in certain applications and listening environments. Embodiments of the adaptive audio system include a hybrid spatial description approach that includes a recommended channel configuration for optimal fidelity and for rendering diffuse or complex, multi-point sources (e.g., stadium crowds, ambience) using an egocentric reference, plus an allocentric, model-based sound description to efficiently enable increased spatial resolution and scalability.
系统组件System components
参考图1，来自内容捕获元件102的原始声音内容数据首先在预处理块104中被处理。系统100的预处理块104包括对象声道滤波组件。在很多情况下，音频对象包含用于启用声音的独立的摇移的单独的声源。在一些情况下，诸如当使用自然的或者“制作”声音创建音频节目时，从包含多个声源的记录中提取单独的声音对象可以是必需的。实施例包括用于将独立源信号与更复杂信号隔离开的方法。要与独立源信号分离的不期望的元素可以包括但不限于，其它独立的声源和背景噪声。另外，混响可以被去除以便恢复“干(dry)”声源。Referring to FIG. 1, the raw sound content data from the content capture element 102 is first processed in a pre-processing block 104. The pre-processing block 104 of the system 100 includes an object channel filtering component. In many cases, an audio object contains a separate sound source for enabling independent panning of the sound. In some cases, such as when creating an audio program using natural or "production" sound, it may be necessary to extract individual sound objects from a recording containing multiple sound sources. Embodiments include methods for isolating an independent source signal from a more complex signal. Undesired elements to be separated from the independent source signal may include, but are not limited to, other independent sound sources and background noise. Additionally, reverberation may be removed to recover a "dry" sound source.
预处理器104还包括源分离和内容类型检测功能。系统通过输入 音频的分析提供元数据的自动产生。通过分析声道对之间的相关输入 的相对水平从多声道记录导出位置元数据。可以例如通过特征提取和 分类来实现内容类型(诸如“讲话”或者“音乐”)的检测。Pre-processor 104 also includes source separation and content type detection functionality. The system provides automatic metadata generation through analysis of the input audio. Positional metadata is derived from multi-channel recordings by analyzing the relative levels of the inputs between pairs of channels. Detection of content type (such as "speech" or "music") can be achieved, for example, through feature extraction and classification.
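One simple way to derive a pan position from the relative levels of a channel pair, as described above, is an RMS level comparison; this is a minimal sketch under assumed conventions (pan in [-1, 1], -1 = fully left), not the actual analysis used by the system:

```python
import math

def rms(samples):
    """Root-mean-square level of one channel's samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_pan(left, right):
    """Estimate a pan position in [-1, 1] from the relative RMS
    levels of a left/right channel pair (-1 = fully left,
    +1 = fully right). A simplified stand-in for the inter-channel
    level analysis described above."""
    l, r = rms(left), rms(right)
    if l + r == 0:
        return 0.0
    return (r - l) / (r + l)

# A source mixed mostly to the right channel pans right of center:
pan = estimate_pan(left=[0.1, -0.1], right=[0.4, -0.4])
```

Content-type detection (speech vs. music) would, as the text notes, instead rely on feature extraction and classification, which is beyond this sketch.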
创作工具Creation Tools
创作工具块106包括用于通过优化声音工程师的创作意图的输 入和编纂(codification)来改善音频节目的创作以允许他一次创建针 对实际上任意回放环境中的回放被优化的最终音频混合的特征。这通 过使用与原始的音频内容关联且编码的位置数据和音频对象而被实 现。为了将声音准确地放置在观众席周围,声音工程师需要控制声音将如何基于实际约束和回放环境的特征最终被呈现。自适应音频系统 通过允许声音工程师通过使用音频对象和位置数据改变如何设计和 混合音频内容来提供这个控制。The authoring tools block 106 includes features for improving the creation of audio programs by optimizing the sound engineer's input and codification of their creative intent, allowing them to create a final audio mix optimized for playback in virtually any playback environment. This is achieved by using positional data and audio objects associated with and encoded with the original audio content. To accurately place sounds around the audience, the sound engineer needs to control how the sounds will ultimately be rendered based on the practical constraints and characteristics of the playback environment. The adaptive audio system provides this control by allowing the sound engineer to change how the audio content is designed and mixed using audio objects and positional data.
音频对象可以被认为是多组声音元素,其可以被感知为从观众席 中的特别的物理位置或者多个位置发出。这种对象可以是静态的,或 者它们可以移动。在自适应音频系统100中,音频对象由元数据控制, 该元数据详述给定时间点处的声音的位置等等。当对象在剧场中被监 视或者回放时,它们根据位置元数据通过使用存在的扬声器被呈现, 而不是必须被输出到物理声道。会话中的轨道可以是音频对象,并且 标准的摇移数据类似于位置元数据。以这种方式,位于屏幕上的内容 可能以与基于声道的内容相同的方式有效地摇移,但是位于环绕中的 内容可以在需要时被呈现到单独的扬声器。虽然音频对象的使用为离 散效果提供期望的控制,但是电影音轨的其它方面在基于声道的环境 中的确有效地工作。例如,许多环境效果或者混响实际上受益于被供 给到扬声器阵列。虽然这些可以被处理为具有足够宽度以填充阵列的 对象,但是保留一些基于声道的功能是有益的。Audio objects can be thought of as groups of sound elements that are perceived as emanating from a specific physical location or locations within the auditorium. Such objects can be static or they can move. In the adaptive audio system 100, audio objects are controlled by metadata that details, among other things, the location of the sound at a given point in time. When the objects are monitored or played back in the theater, they are rendered using the available speakers based on the positional metadata, rather than necessarily being output to physical channels. Tracks in a session can be audio objects, and standard panning data is analogous to positional metadata. In this way, content located on screen can be effectively panned in the same manner as channel-based content, while content located in surrounds can be rendered to separate speakers as needed. While the use of audio objects provides desirable control for discrete effects, other aspects of movie soundtracks do work effectively in a channel-based environment. For example, many ambient effects or reverberations actually benefit from being fed to a speaker array. While these can be processed as objects with sufficient width to fill the array, retaining some channel-based functionality is beneficial.
在一个实施例中,自适应音频系统除了音频对象之外还支持“基 础(bed)”,其中基础是有效地基于声道的子混合或者主干(stem)。 这些可以独立地或者结合成单个基础地被传递以用于最终回放(呈 现),取决于内容创作者的意图。这些基础可以被创建在不同的基于 声道的配置(诸如5.1、7.1)中,并且可扩展到更广泛的格式,诸如 9.1,以及包括头上的扬声器的阵列。In one embodiment, the adaptive audio system supports "beds" in addition to audio objects, where a bed is effectively a channel-based submix or stem. These can be delivered independently or combined into a single bed for final playback (rendering), depending on the content creator's intent. These beds can be created in different channel-based configurations (such as 5.1, 7.1) and are extensible to a wider range of formats, such as 9.1, and arrays including overhead speakers.
图2示出按照一个实施例的声道和基于对象的数据的组合以便 产生自适应音频混合。如处理200所示,基于声道的数据202(其例 如可以是以脉冲编码调制的(PCM)数据形式提供的5.1或者7.1环 绕声数据)与音频对象数据204结合以便产生自适应音频混合208。音频对象数据204通过将原始的基于声道的数据的元素与指定关于音 频对象的位置的特定参数的关联元数据结合来被产生。FIG2 illustrates the combination of channel and object-based data to produce an adaptive audio mix, according to one embodiment. As shown in process 200, channel-based data 202 (which may be, for example, 5.1 or 7.1 surround sound data provided in the form of pulse code modulated (PCM) data) is combined with audio object data 204 to produce an adaptive audio mix 208. The audio object data 204 is produced by combining elements of the original channel-based data with associated metadata that specifies specific parameters regarding the location of audio objects.
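The combination of a channel bed with object data, as in process 200, can be sketched as a single program structure; the dictionary keys and function name here are hypothetical, chosen only to illustrate the pairing of audio with positional metadata:

```python
def make_adaptive_mix(bed_channels, objects):
    """Combine a channel-based bed (e.g., 5.1 or 7.1 PCM data) with
    audio objects and their associated metadata into one adaptive
    audio program. Purely illustrative; field names are not from
    the application."""
    return {
        "beds": bed_channels,   # e.g., {"L": [...], "R": [...], ...}
        "objects": objects,     # each: {"audio": [...], "metadata": {...}}
    }

mix = make_adaptive_mix(
    bed_channels={"L": [0.0, 0.1], "R": [0.0, -0.1]},
    objects=[{"audio": [0.3, 0.2],
              "metadata": {"position": (0.5, 0.5, 0.0)}}],
)
```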
如图2中概念上所示出的,创作工具提供创建音频节目的能力, 该音频节目同时包含对象声道和扬声器声道组的组合。例如,音频节 目可以包含可选地组织成组的一个或更多个扬声器声道(或者轨道, 例如立体声或者5.1轨道)、用于一个或更多个扬声器声道的描述元 数据、一个或更多个对象声道、以及用于一个或更多个对象声道的描 述元数据。在一个音频节目内,每个扬声器声道组以及每个对象声道 可以通过使用一个或更多个不同的采样率被表示。例如,数字电影(D 电影)应用支持48kHz和96kHz采样率,但是还可以支持其它采样 率。此外,还可以支持具有不同的采样率的声道的摄取(ingest)、 存储和编辑。As conceptually illustrated in FIG2 , the authoring tool provides the ability to create audio programs that contain a combination of object channels and speaker channel groups. For example, an audio program may contain one or more speaker channels (or tracks, such as stereo or 5.1 tracks), optionally organized into groups, descriptive metadata for the one or more speaker channels, one or more object channels, and descriptive metadata for the one or more object channels. Within an audio program, each speaker channel group and each object channel can be represented using one or more different sampling rates. For example, digital cinema (D-cinema) applications support 48 kHz and 96 kHz sampling rates, but other sampling rates may also be supported. Furthermore, the ingestion, storage, and editing of channels with different sampling rates may also be supported.
音频节目的创建要求声音设计的步骤,其包括结合声音元素作为 水平调整的构成声音元素的和以便创建新的期望的声音效果。自适应 音频系统的创作工具使得能够使用空间-视觉的声音设计图形用户界 面创建声音效果作为具有相对位置的声音对象的集合。例如,声音产 生对象(例如,汽车)的视觉表示可以被用作用于组装音频元素(排 气音调(exhaust note)、轮胎哼鸣(hum)、发动机噪声)作为包含 声音和合适的空间位置(在尾管、轮胎、机罩(hood)处)的对象声 道的模板。然后单独的对象声道可以作为整体被链接和操纵。创作工 具106包括若干用户接口元素以便允许声音工程师输入控制信息和观 看混合参数,并且改善系统功能。声音设计和创作处理通过允许对象 声道和扬声器声道作为整体被链接和操纵而也被改善。一个示例是将 具有离散、干声源的对象声道与包含关联的混响信号的一组扬声器声 道结合。The creation of an audio program requires a sound design step, which includes combining sound elements as a sum of constituent sound elements with adjusted levels to create a new desired sound effect. The authoring tool of the adaptive audio system enables the creation of sound effects as a collection of sound objects with relative positions using a spatial-visual sound design graphical user interface. For example, a visual representation of a sound-producing object (e.g., a car) can be used as a template for assembling audio elements (exhaust note, tire hum, engine noise) as object channels containing sounds and appropriate spatial positions (at the tailpipe, tire, hood). The individual object channels can then be linked and manipulated as a whole. The authoring tool 106 includes several user interface elements to allow the sound engineer to input control information and view mixing parameters, and improve system functionality. The sound design and authoring process is also improved by allowing object channels and speaker channels to be linked and manipulated as a whole. An example is combining an object channel with a discrete, dry sound source with a set of speaker channels containing associated reverberation signals.
音频创作工具106支持结合多个音频声道(通常被称为混合)的 能力。多个混合方法被支持并且可以包括传统的基于水平的混合和基 于响度的混合。在基于水平的混合中,宽带缩放(scaling)被应用于 音频声道,并且缩放后的音频声道然后被一起求和。用于每个声道的 宽带缩放因子被选择以便控制结果得到的混合的信号的绝对水平,以 及混合的信号内的混合的声道的相对水平。在基于响度的混合中,一 个或更多个输入信号通过使用依赖频率的振幅缩放被修改,其中依赖 频率的振幅被选择以便提供期望的感知的绝对和相对响度,而同时保 持输入声音的感知的音色。The audio authoring tool 106 supports the ability to combine multiple audio channels (commonly referred to as mixing). Multiple mixing methods are supported and can include traditional level-based mixing and loudness-based mixing. In level-based mixing, broadband scaling is applied to the audio channels, and the scaled audio channels are then summed together. The broadband scaling factor for each channel is selected to control the absolute level of the resulting mixed signal, as well as the relative levels of the mixed channels within the mixed signal. In loudness-based mixing, one or more input signals are modified using frequency-dependent amplitude scaling, where the frequency-dependent amplitude is selected to provide the desired perceived absolute and relative loudness while maintaining the perceived timbre of the input sounds.
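The level-based case described above (broadband scaling, then summation) reduces to a weighted sum of channels; the sketch below shows only this case, with invented names, and omits the frequency-dependent gains that loudness-based mixing would require:

```python
def level_mix(channels, gains):
    """Level-based mix: apply a broadband scale factor to each
    channel, then sum the scaled channels sample-by-sample.
    The per-channel gains control both the absolute level of the
    result and the relative levels of the mixed channels."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, g in zip(channels, gains):
        for i in range(n):
            out[i] += g * ch[i]
    return out

# Mix two channels, the first attenuated by 6 dB-ish (factor 0.5):
mixed = level_mix([[1.0, 0.5], [0.2, 0.4]], gains=[0.5, 1.0])
```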
创作工具允许创建扬声器声道和扬声器声道组的能力。这允许元 数据与每个扬声器声道组关联。每个扬声器声道组可以根据内容类型 被加标签。内容类型可经由文本描述扩展。内容类型可以包括但不限 于,对话、音乐和效果。每个扬声器声道组可以被分配关于如何从一 个声道配置上混(upmix)到另一个的唯一的指令,其中上混被定义 为从N个声道创建M个音频声道,其中M>N。上混指令可以包括 但不限于以下:用于指示是否容许上混的启用/禁用标志;用于控制每 个输入和输出声道之间的映射的上混矩阵;并且默认启用和矩阵设定 可以基于内容类型被分配,例如,仅仅对于音乐启用上混。每个扬声 器声道组也可以被分配关于如何从一个声道配置下混(downmix)到 另一个的唯一的指令,其中下混被定义为从X个声道创建Y个音频 声道,其中Y<X。下混指令可以包括但不限于以下:用于控制每个 输入和输出声道之间的映射的矩阵;并且默认矩阵设定可以基于内容 类型被分配,例如,对话应该下混到屏幕上;效果应该下混离开屏幕。 每个扬声器声道也可以与用于在呈现期间禁用低音管理的元数据标 志关联。The authoring tool allows for the creation of speaker channels and speaker channel groups. This allows metadata to be associated with each speaker channel group. Each speaker channel group can be tagged based on content type. Content types can be expanded via textual descriptions. Content types can include, but are not limited to, dialogue, music, and effects. Each speaker channel group can be assigned unique instructions for upmixing from one channel configuration to another, where upmixing is defined as creating M audio channels from N channels, where M>N. Upmix instructions can include, but are not limited to, the following: an enable/disable flag indicating whether upmixing is allowed; an upmix matrix controlling the mapping between each input and output channel; and default enable and matrix settings can be assigned based on content type, e.g., enabling upmixing only for music. Each speaker channel group can also be assigned unique instructions for downmixing from one channel configuration to another, where downmixing is defined as creating Y audio channels from X channels, where Y<X. Downmix instructions may include, but are not limited to, a matrix for controlling the mapping between each input and output channel; and default matrix settings may be assigned based on content type, e.g., dialogue should be downmixed onto the screen; effects should be downmixed off the screen. Each speaker channel may also be associated with a metadata flag for disabling bass management during presentation.
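An up- or downmix instruction of the kind described above is, at its core, a matrix mapping input channels to output channels; this is a generic sketch (matrix layout assumed as rows = outputs, columns = inputs), not the codec's actual instruction format:

```python
def apply_mix_matrix(inputs, matrix):
    """Apply an up-/downmix matrix: each output channel is a
    weighted sum of the input channels, with matrix[out][in]
    giving the weight. Creating M outputs from N inputs with
    M > N is an upmix; Y outputs from X inputs with Y < X is
    a downmix."""
    n = len(inputs[0])
    outputs = []
    for row in matrix:
        out = [0.0] * n
        for w, ch in zip(row, inputs):
            for i in range(n):
                out[i] += w * ch[i]
        outputs.append(out)
    return outputs

# Downmix stereo (2 channels) to mono (1 channel) with equal weights:
mono = apply_mix_matrix([[1.0, 0.0], [0.0, 1.0]], matrix=[[0.5, 0.5]])
```

The content-type defaults mentioned above (e.g., "enable upmix only for music") would simply select which matrix, if any, is applied for a given channel group.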
实施例包括使得能够创建对象声道和对象声道组的特征。本发明 允许元数据与每个对象声道组关联。每个对象声道组可以根据内容类 型被加标签。内容类型是可扩展的经由文本描述,其中内容类型可以 包括但不限于对话、音乐和效果。每个对象声道组可以被分配用于描 述应该如何呈现一个或多个对象的元数据。Embodiments include features that enable the creation of object channels and object channel groups. The present invention allows metadata to be associated with each object channel group. Each object channel group can be tagged according to content type. Content type is extensible and described via text, where content types may include, but are not limited to, dialogue, music, and effects. Each object channel group can be assigned metadata describing how one or more objects should be rendered.
位置信息被提供以便指示期望的表观源位置。位置可以通过使用 自我中心的或非自我中心的参考系被指示。在源位置要涉及收听者时 自我中心的参考是合适的。对于自我中心的位置,球面坐标对于位置 描述是有用的。非自我中心的参考对于其中相对于表现环境中的对象 (诸如视觉显示屏幕或房间边界)提及源位置的电影或其它音频/视觉 表现是典型的参考系。三维(3D)轨迹信息被提供以便使得能够进行 位置的内插或用于使用其它呈现决定,诸如使得能够进行“快移 (snap)到模式”。尺寸信息被提供以便指示期望的表观感知的音频 源尺寸。Position information is provided to indicate the desired apparent source position. Position can be indicated using either an egocentric or allocentric reference system. Egocentric reference is appropriate when the source position is to be relative to the listener. For egocentric positions, spherical coordinates are useful for position description. Allocentric reference is a typical reference system for movies or other audio/visual presentations in which source positions are referred to relative to objects in the presentation environment, such as a visual display screen or room boundaries. Three-dimensional (3D) trajectory information is provided to enable interpolation of position or for use in other rendering decisions, such as enabling a "snap to mode." Size information is provided to indicate the desired apparent perceived size of the audio source.
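For egocentric positions, the spherical coordinates mentioned above can be converted to Cartesian form for rendering; the axis conventions below (azimuth 0 = straight ahead along +y, positive azimuth to the listener's left, z up) are assumptions made for this sketch only:

```python
import math

def spherical_to_cartesian(azimuth_deg, elevation_deg, distance):
    """Convert an egocentric spherical position (azimuth and
    elevation in degrees, distance relative to the listener) to
    Cartesian (x, y, z). Conventions assumed here: azimuth 0 is
    straight ahead (+y), positive azimuth is to the left, x points
    to the listener's right, z is up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = -distance * math.cos(el) * math.sin(az)
    y = distance * math.cos(el) * math.cos(az)
    z = distance * math.sin(el)
    return (x, y, z)

front = spherical_to_cartesian(0.0, 0.0, 1.0)   # directly ahead of the listener
left = spherical_to_cartesian(90.0, 0.0, 1.0)   # directly to the listener's left
```

An allocentric position would instead be given directly in room-relative coordinates (e.g., normalized to the room boundaries or screen), so no listener-relative conversion is needed.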
空间量子化通过“快移到最接近扬声器”控制被提供,该控制由 声音工程师或混合者指示意图以便具有由正好一个扬声器呈现的对 象(对空间精度有一些可能的牺牲)。对允许的空间失真的限制可以 通过仰角(elevation)和方位角(azimuth)容限阈值被指示,使得如 果超过阈值则不会出现“快移”功能。除了距离阈值之外,交叉衰落(crossfade)速率参数也可以被指示,以便在期望的位置在扬声器之 间交叉时控制移动对象将如何快速地从一个扬声器转变或跳变到另 一个。Spatial quantization is provided via a "snap to nearest speaker" control, which is indicated by the sound engineer or mixer to have an object rendered by exactly one speaker (with some possible sacrifice in spatial accuracy). Limits on the allowed spatial distortion can be indicated via elevation and azimuth tolerance thresholds, so that if the thresholds are exceeded, the "snap" function will not occur. In addition to the distance thresholds, a crossfade rate parameter can also be indicated to control how quickly a moving object will transition or jump from one speaker to another when the desired position crosses between speakers.
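The "snap to nearest speaker" behavior with tolerance thresholds might look like the following; the speaker table and function signature are invented for illustration (angles in degrees), and the crossfade-rate handling is omitted:

```python
def snap_to_speaker(obj_az, obj_el, speakers, az_tol, el_tol):
    """Return the name of the nearest speaker if it lies within the
    azimuth/elevation tolerance thresholds, otherwise None (meaning:
    no snap occurs and the object is rendered normally). A snap
    renders the object from exactly one speaker, trading some
    spatial accuracy as described above."""
    best, best_err = None, None
    for name, (az, el) in speakers.items():
        d_az, d_el = abs(obj_az - az), abs(obj_el - el)
        if d_az <= az_tol and d_el <= el_tol:
            err = d_az + d_el
            if best_err is None or err < best_err:
                best, best_err = name, err
    return best

# A hypothetical front speaker layout (azimuth, elevation):
SPEAKERS = {"L": (-30.0, 0.0), "C": (0.0, 0.0), "R": (30.0, 0.0)}
snap = snap_to_speaker(-25.0, 2.0, SPEAKERS, az_tol=10.0, el_tol=5.0)
```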
在一个实施例中,依赖的空间元数据被用于特定位置元数据。例 如,元数据可以通过将其与从属对象要跟随的“主控”对象关联来对 于“从属”对象被自动产生。时滞或相对速度可以被分配给从属对象。 机构也可以被提供以便允许对于多组或多群对象的重力的声中心的 定义,使得对象可以被呈现使得它被感知为围绕另一个对象移动。在 这种情况下,一个或更多个对象可以围绕对象或定义的区域(诸如主 导点或房间的干区域)旋转。即使最终的位置信息将被表示为相对于 房间的位置,与相对于另一个对象的位置相反,重力的声中心然后也 将被用在呈现阶段中以便帮助确定对于每个合适的基于对象的声音的位置信息。In one embodiment, dependent spatial metadata is used for specific positional metadata. For example, metadata can be automatically generated for "slave" objects by associating them with a "master" object that the slave object is to follow. Time lags or relative velocities can be assigned to the slave objects. Mechanisms can also be provided to allow the definition of an acoustic center of gravity for groups or clusters of objects, so that an object can be rendered so that it is perceived as moving around another object. In this case, one or more objects can be rotated around an object or a defined area (such as a dominant point or a dry area of a room). Even if the final positional information will be expressed relative to the room, as opposed to relative to another object, the acoustic center of gravity will still be used in the rendering stage to help determine the positional information for each appropriate object-based sound.
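The master/slave relationship with a time lag, described above, can be sketched by deriving the slave's positional metadata from the master's path; the frame-based representation here is an assumption for illustration:

```python
def slave_positions(master_path, lag_frames):
    """Derive a 'slave' object's positional metadata by applying a
    time lag (in metadata frames) to the 'master' object's path.
    Before the lag has elapsed, the slave holds the master's first
    position. Purely illustrative of the dependent-metadata idea."""
    out = []
    for i in range(len(master_path)):
        j = max(0, i - lag_frames)
        out.append(master_path[j])
    return out

master = [(0.0, 0.0), (0.2, 0.0), (0.4, 0.0), (0.6, 0.0)]
slave = slave_positions(master, lag_frames=2)
```

An acoustic center of gravity for a group of objects would similarly be a derived reference point used at rendering time, even though final positions remain room-relative.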
在呈现对象时,它根据位置元数据以及回放扬声器的位置被分配 给一个或更多个扬声器。额外的元数据可以与对象关联以便限制应该 使用的扬声器。限制的使用可以禁止使用指示的扬声器或仅仅禁止指 示的扬声器(相比于否则会被应用的情况,允许更少能量到扬声器或 多个扬声器中)。要被约束的扬声器组可以包括但不限于,命名的扬 声器或扬声器区域中的任意一个(例如L、C、R等),或扬声器区 域,诸如:前壁、后壁、左壁、右壁、天花板、地板、房间内的扬声 器等等。同样地,在指定多个声音元素的期望的混合的过程中,可以 使得一个或更多个声音元素变得听不见或“被掩蔽”,由于存在其它 “掩蔽”声音元素。例如,当检测到被掩蔽的元素时,它们可以经由 图形显示器被识别给用户。When an object is rendered, it is assigned to one or more speakers based on positional metadata and the location of the playback speakers. Additional metadata can be associated with the object to restrict the speakers that should be used. The use of restrictions can prohibit the use of the indicated speakers or only the indicated speakers (allowing less energy to the speaker or speakers than would otherwise be applied). The group of speakers to be constrained can include, but is not limited to, any of named speakers or speaker zones (e.g., L, C, R, etc.), or speaker zones such as: front wall, back wall, left wall, right wall, ceiling, floor, speakers within the room, etc. Similarly, in the process of specifying a desired mix of multiple sound elements, one or more sound elements can be made inaudible or "masked" due to the presence of other "masking" sound elements. For example, when masked elements are detected, they can be identified to the user via a graphical display.
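The speaker-restriction metadata described above amounts to filtering the candidate speaker set by zone before assignment; zone names and the mapping below are hypothetical:

```python
def allowed_speakers(all_speakers, excluded_zones, zone_map):
    """Filter the speaker list for one object according to its
    restriction metadata: speakers belonging to an excluded zone
    are not used for rendering that object. Zone names and layout
    are illustrative only."""
    return [s for s in all_speakers
            if zone_map.get(s) not in excluded_zones]

ZONES = {"L": "front", "C": "front", "R": "front",
         "Ls": "left wall", "Rs": "right wall", "Ts": "ceiling"}

# An object whose metadata forbids ceiling speakers:
usable = allowed_speakers(["L", "C", "R", "Ls", "Rs", "Ts"],
                          excluded_zones={"ceiling"}, zone_map=ZONES)
```

The softer variant mentioned in the text (allowing less energy rather than none) would replace the hard filter with per-zone gain reductions.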
如其它地方描述的,音频节目描述可以适应于在各式各样的扬声 器设施和声道配置上呈现。当音频节目被创作时,重要的是监视在预 期的回放配置上呈现节目的效果以检验实现期望的结果。本发明包括 选择目标回放配置和监视结果的能力。另外,系统可以自动监视将在 每个预期的回放配置中被产生的最坏情况(即最高)信号水平,并且 在将出现裁剪(clipping)或限制的情况下提供指示。As described elsewhere, audio program descriptions can be adapted for presentation on a wide variety of speaker installations and channel configurations. When an audio program is authored, it is important to monitor the performance of the program on the intended playback configuration to verify that the desired results are achieved. The present invention includes the ability to select a target playback configuration and monitor the results. In addition, the system can automatically monitor the worst-case (i.e., maximum) signal level that will be produced in each intended playback configuration and provide an indication if clipping or limiting will occur.
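The worst-case level monitoring described above can be sketched as a peak scan per target configuration; the report format and the 1.0 full-scale limit are assumptions for this illustration:

```python
def clipping_report(rendered_configs, limit=1.0):
    """For each candidate playback configuration, find the
    worst-case (highest) peak sample magnitude and flag
    configurations where clipping or limiting would occur.
    Input: {config_name: list of channel sample lists}."""
    report = {}
    for name, channels in rendered_configs.items():
        peak = max(abs(s) for ch in channels for s in ch)
        report[name] = {"peak": peak, "clips": peak > limit}
    return report

report = clipping_report({
    "5.1": [[0.8, -0.9], [0.7, 0.2]],
    "stereo_downmix": [[1.2, -0.3], [0.5, 0.1]],  # downmix summation pushed it over
})
```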
图3是按照一个实施例的示出创建、封装和呈现自适应音频内容 的工作流程的框图。图3的工作流程300被分成标记为创建/创作、封 装和展出的三个不同的任务组。通常,图2中示出的基础和对象的混 合模型允许大多数的声音设计、编辑、预混合和最终混合以与当今相 同的方式被执行并且不向当前处理添加过多的开销。在一个实施例 中,自适应音频功能以与声音制作和处理设备结合使用的软件、固件 或电路形式被提供,其中这种设备可以是新型硬件系统或对现有的系 统的更新。例如,插电式应用可以为数字音频工作站提供以允许声音 设计和编辑内的现有的摇移技术保持不变。以这种方式,可以在5.1 或类似的环绕装备的编辑室中的工作站内铺设基础和对象两者。对象 音频和元数据被记录在会话中以准备在配音(dubbing)剧场中的预 混合和最终混合阶段。FIG3 is a block diagram illustrating a workflow for creating, packaging, and presenting adaptive audio content, according to one embodiment. The workflow 300 of FIG3 is divided into three distinct task groups labeled creation/authoring, packaging, and presentation. In general, the hybrid model of foundation and objects shown in FIG2 allows most sound design, editing, pre-mixing, and final mixing to be performed in the same manner as today and without adding excessive overhead to current processing. In one embodiment, the adaptive audio functionality is provided in the form of software, firmware, or circuitry used in conjunction with sound production and processing equipment, where such equipment can be a new hardware system or an update to an existing system. For example, a plug-in application can be provided for a digital audio workstation to allow existing panning techniques within sound design and editing to remain unchanged. In this way, both foundation and objects can be laid within a workstation in a 5.1 or similar surround-equipped editing room. Object audio and metadata are recorded in sessions in preparation for the pre-mixing and final mixing stages in a dubbing theater.
如图3所示,创建或创作任务包括通过用户(例如,在下面示例 中,声音工程师)输入混合控制302到混合控制台或音频工作站304。 在一个实施例中,元数据被集成到混合控制台表面中,允许声道条 (strips)的音量控制器(faders)、摇移和音频处理对基础或主干和 音频对象两者起作用。可以使用控制台表面或者工作站用户界面编辑 元数据,并且通过使用呈现和主控单元(RMU)306监视声音。基础 和对象音频数据以及关联的元数据在主控会话期间被记录以便创建 ‘打印主控器’,其包括自适应音频混合310和任何其它呈现的可交 付物(deliverables)(诸如环绕7.1或5.1剧场的混合)308。现有的 创作工具(例如数字音频工作站,诸如Pro工具)可以被用来允许声 音工程师标记混合会话内的单独的音频轨道。实施例通过允许用户标 记轨道内的单独的子片段以帮助发现或快速识别音频元素,来扩展这 个概念。到使得能够定义和创建元数据的混合控制台的用户界面可以 通过图形用户界面元素、物理控制(例如,滑动器和旋钮)或其任何 组合被实现。As shown in FIG3 , the creation or authoring task includes inputting mixing controls 302 by a user (e.g., in the example below, a sound engineer) into a mixing console or audio workstation 304. In one embodiment, metadata is integrated into the mixing console surface, allowing faders, panning, and audio processing of channel strips to act on both the base or stems and audio objects. Metadata can be edited using the console surface or workstation user interface, and the sound can be monitored using a rendering and mastering unit (RMU) 306. Base and object audio data and associated metadata are recorded during the mastering session to create a 'print master', which includes an adaptive audio mix 310 and any other rendered deliverables (such as a surround 7.1 or 5.1 theatrical mix) 308. Existing authoring tools (e.g., digital audio workstations such as Pro Tools) can be used to allow sound engineers to mark individual audio tracks within a mixing session. Embodiments extend this concept by allowing users to mark individual sub-clips within a track to help find or quickly identify audio elements. The user interface to the mixing console that enables the definition and creation of metadata may be implemented through graphical user interface elements, physical controls (e.g., sliders and knobs), or any combination thereof.
在封装阶段中,打印主控文件通过使用工业标准的MXF包装 (wrap)过程被包装、混编(hash)和可选地加密,以便确保用于递 送到数字电影封装设施的音频内容的完整性。这个步骤可以通过数字 电影处理器(DCP)312或任何合适的音频处理器取决于最终的回放环境(诸如标准的环绕声音装备的剧场318、自适应音频启用剧场320 或任何其它回放环境)被执行。如图3所示,处理器312根据展出环 境输出合适的音频信号314和316。During the packaging phase, the print master file is packaged, hashed, and optionally encrypted using the industry-standard MXF wrapping process to ensure the integrity of the audio content for delivery to the digital cinema packaging facility. This step can be performed by a digital cinema processor (DCP) 312 or any suitable audio processor, depending on the final playback environment (such as a standard surround sound-equipped theater 318, an adaptive audio-enabled theater 320, or any other playback environment). As shown in FIG3 , processor 312 outputs appropriate audio signals 314 and 316 depending on the exhibition environment.
在一个实施例中,自适应音频打印主控器包含自适应音频混合, 以及遵从标准的DCI的脉冲编码调制(PCM)混合。PCM混合可以 通过配音剧场中的呈现和主控单元被呈现,或通过分离的混合途径在 需要时被创建。PCM音频在数字电影处理器312内形成标准的主音 频轨道文件,并且自适应音频形成额外的轨道文件。这种轨道文件可 以遵从现有工业标准,并且被不能使用它的遵从DCI的服务器忽略。In one embodiment, the adaptive audio print master includes an adaptive audio mix and a standard DCI-compliant pulse code modulation (PCM) mix. The PCM mix can be rendered by the rendering and mastering units in the dubbing theater, or created as needed through a separate mixing path. The PCM audio forms a standard main audio track file within the digital cinema processor 312, and the adaptive audio forms an additional track file. This track file can conform to existing industry standards and be ignored by DCI-compliant servers that cannot use it.
在示例电影回放环境中,包含自适应音频轨道文件的DCP被服 务器识别为有效的封装体,并且被摄取到服务器中并且随后被流到自 适应音频电影处理器。系统具有线性的PCM和自适应音频文件两者 可用,该系统可以根据需要在它们之间切换。对于分发到展出阶段, 自适应音频封装方案允许单个类型封装体的递送被递送给电影院。 DCP封装体包含PCM和自适应音频文件两者。安全密钥(诸如密钥 递送消息(KDM))的使用可以被并入以便使得能够安全递送电影 内容或其它类似的内容。In an example cinema playback environment, a DCP containing an adaptive audio track file is recognized by the server as a valid package, ingested into the server, and subsequently streamed to the adaptive audio cinema processor. The system has both linear PCM and adaptive audio files available, switching between them as needed. For distribution to exhibition stages, the adaptive audio packaging solution allows a single type of package to be delivered to cinemas. The DCP package contains both PCM and adaptive audio files. The use of security keys, such as key delivery messages (KDMs), can be incorporated to enable secure delivery of cinema content or other similar content.
如图3所示,自适应音频方法通过使得声音工程师能够通过音频 工作站304表达关于音频内容的呈现和回放的他或她的意图而被实 现。通过控制特定输入控制,工程师能够根据收听环境指定在哪里和 如何回放音频对象和声音元素。响应于工程师的混合输入302在音频 工作站304中产生元数据以便提供呈现队列,其控制空间参数(例如, 位置、速度、强度、音色等)并且指定收听环境中的哪个扬声器(哪 些扬声器)或扬声器组在展出期间播放相应的声音。元数据与工作站 304或RMU 306中的相应的音频数据关联以用于通过DCP 312封装 和传输。As shown in Figure 3, the adaptive audio approach is implemented by enabling the sound engineer to express his or her intent regarding the presentation and playback of audio content through an audio workstation 304. By controlling specific input controls, the engineer can specify where and how audio objects and sound elements should be played back, depending on the listening environment. In response to the engineer's mixing input 302, metadata is generated in the audio workstation 304 to provide a rendering queue. This metadata controls spatial parameters (e.g., position, velocity, intensity, timbre, etc.) and specifies which speaker(s) or speaker groups in the listening environment should play the corresponding sound during the presentation. The metadata is associated with the corresponding audio data in the workstation 304 or RMU 306 for packaging and transmission via the DCP 312.
通过工程师提供工作站304的控制的软件工具和图形用户界面 至少包括图1的创作工具106的部分。The software tools and graphical user interface that provide control of the workstation 304 by the engineer include at least a portion of the authoring tool 106 of Figure 1.
混合音频编解码器Hybrid Audio Codec
如图1所示,系统100包括混合音频编解码器108。这个组件包 含音频编码、分发和解码系统,其被配置为产生包含传统的基于声道 的音频元素和音频对象编码元素两者的单个比特流。混合音频编码系 统围绕基于声道的编码系统被构建,基于声道的编码系统被配置为产 生单个(统一)比特流,其同时可与第一解码器和一个或更多个二次 解码器兼容(即,可由第一解码器和一个或更多个二次解码器解码), 第一解码器被配置为解码根据第一编码协议编码的(基于声道的)音 频数据,二次解码器被配置为解码根据一个或更多个二次编码协议编 码的(基于对象的)音频数据。比特流可以包括可由第一解码器解码 (并且被任何二次解码器忽略)的编码后的数据(以数据子帧(burst) 形式)和可由一个或更多个二次解码器解码(并且被第一解码器忽略) 的编码后的数据(例如,数据的其它子帧)两者。来自二次解码器中 的一个或更多个和第一解码器的解码后的音频和关联的信息(元数据)然后可以以使得基于声道的和基于对象的信息两者被同时呈现的 方式被结合以便再造环境的复制(facsimile)、声道、空间信息、和 呈现到混合编码系统的对象(即在三维空间或收听环境内)。As shown in FIG1 , system 100 includes a hybrid audio codec 108. This component comprises an audio encoding, distribution, and decoding system configured to produce a single bitstream containing both traditional channel-based audio elements and audio object coding elements. The hybrid audio coding system is built around a channel-based coding system configured to produce a single (unified) bitstream that is compatible with (i.e., decodable by) both a first decoder (configured to decode (channel-based) audio data encoded according to a first coding protocol) and one or more secondary decoders (configured to decode (object-based) audio data encoded according to one or more secondary coding protocols). The bitstream can include both encoded data (in the form of data bursts) that can be decoded by the first decoder (and ignored by any secondary decoders) and encoded data (e.g., other bursts of data) that can be decoded by one or more secondary decoders (and ignored by the first decoder). The decoded audio and associated information (metadata) from one or more of the secondary decoders and the first decoder can then be combined in such a way that both channel-based and object-based information are presented simultaneously to recreate a facsimile of the environment, channels, spatial information, and objects presented to the hybrid coding system (i.e., within a three-dimensional space or listening environment).
编解码器108产生包含与多组声道位置(扬声器)有关的信息和编码的音频信息的比特流。在一个实施例中,一组声道位置是固定的并且用于基于声道的编码协议,而另一组声道位置是自适应的并且用于基于音频对象的编码协议,使得用于音频对象的声道配置可以随时间而改变(取决于在声场中将对象放置在哪里)。因此,混合音频编码系统可以携带关于用于回放的两组扬声器位置的信息,其中一组可以是固定的并且是另一组的子集。支持遗留编码的音频信息的装置将解码和呈现来自固定的子集的音频信息,而能够支持更大组的装置可以解码和呈现额外的编码的音频信息,其将以时间变化的方式分配给来自更大组的不同的扬声器。此外,系统不依赖于第一解码器以及二次解码器中的一个或更多个在系统和/或装置内同时存在。因此,仅仅包含支持第一协议的解码器的遗留和/或现有的装置/系统将产生完全兼容的、要经由传统的基于声道的再现系统呈现的声场。在该情况下,混合比特流协议的未知的或不被支持的部分(或多个部分)(即,由二次编码协议表示的音频信息)将被支持第一混合编码协议的系统或装置解码器忽略。Codec 108 generates a bitstream containing information about multiple sets of channel positions (speakers) together with encoded audio information. In one embodiment, one set of channel positions is fixed and used for the channel-based coding protocol, while another set is adaptive and used for the audio-object-based coding protocol, so that the channel configuration for an audio object can change over time (depending on where the object is placed in the sound field). The hybrid audio coding system can thus carry information about two sets of speaker positions for playback, where one set may be fixed and a subset of the other. Devices that support legacy coded audio information will decode and render the audio information from the fixed subset, while devices capable of supporting the larger set can decode and render the additional coded audio information, which is assigned in a time-varying manner to different speakers from the larger set. Furthermore, the system does not depend on the first decoder and one or more of the secondary decoders being simultaneously present within the system and/or device. Hence, a legacy and/or existing device/system containing only a decoder supporting the first protocol will produce a fully compatible sound field to be rendered via a traditional channel-based reproduction system. In this case, the unknown or unsupported portion(s) of the hybrid bitstream protocol (i.e., the audio information represented by a secondary coding protocol) will be ignored by the system or device decoder supporting the first hybrid coding protocol.
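作为说明,下面的Python示意给出一个遗留解码器如何跳过它不支持的数据子帧;其中的子帧语法(1字节类型标签加1字节长度字段)是为举例而虚构的,并非实际的编解码器帧格式。As an illustration, the following Python sketch shows how a legacy decoder can step over data bursts it does not support; the burst syntax used here (a one-byte type tag plus a one-byte length field) is invented for the example and is not the actual codec framing.

```python
# Hypothetical burst syntax invented for illustration only.
CHANNEL_BURST = 0x01   # decodable by the first (channel-based) decoder
OBJECT_BURST = 0x02    # decodable only by a secondary (object-based) decoder

def decode_bursts(bitstream, supported_types):
    """Return payloads of bursts this decoder supports; step over the rest."""
    decoded, pos = [], 0
    while pos < len(bitstream):
        burst_type = bitstream[pos]
        length = bitstream[pos + 1]
        payload = bitstream[pos + 2:pos + 2 + length]
        if burst_type in supported_types:
            decoded.append((burst_type, payload))
        pos += 2 + length   # unsupported bursts are skipped, not treated as errors
    return decoded

# Both decoders consume the same unified stream; the legacy one never sees object bursts.
stream = bytes([CHANNEL_BURST, 3]) + b"abc" + bytes([OBJECT_BURST, 2]) + b"xy"
legacy = decode_bursts(stream, {CHANNEL_BURST})
full = decode_bursts(stream, {CHANNEL_BURST, OBJECT_BURST})
```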
在另一实施例中,编解码器108被配置为操作在如下的模式中, 该模式中第一编码子系统(支持第一协议)包含在混合编码器内存在 的二次编码器子系统中的一个或更多个以及第一编码器两者中表示 的所有声场信息(声道和对象)的结合的表示。这确保混合比特流包 括通过允许在仅仅支持第一协议的解码器内呈现和表示音频对象(典 型地在一个或更多个二次编码器协议中携带)而与仅仅支持第一编码 器子系统的协议的解码器的向后兼容性。In another embodiment, the codec 108 is configured to operate in a mode in which the first encoding subsystem (supporting the first protocol) contains a combined representation of all sound field information (channels and objects) represented in both one or more of the secondary encoder subsystems present in the hybrid encoder and the first encoder. This ensures that the hybrid bitstream includes backward compatibility with decoders that only support the protocol of the first encoder subsystem by allowing audio objects (typically carried in one or more secondary encoder protocols) to be rendered and represented in decoders that only support the first protocol.
在又一个实施例中,编解码器108包括两个或更多个编码子系 统,其中这些子系统中的每一个被配置为根据不同协议编码音频数 据,并且被配置为结合子系统的输出以产生混合格式(统一的)比特 流。In yet another embodiment, the codec 108 includes two or more encoding subsystems, wherein each of these subsystems is configured to encode audio data according to a different protocol, and is configured to combine the outputs of the subsystems to produce a mixed-format (unified) bitstream.
实施例的好处之一是在宽范围的内容分发系统之上运送混合编 码的音频比特流的能力,其中分发系统中的每一个传统地仅仅支持根 据第一编码协议编码的数据。这消除了对任何系统和/或传输级别协议 进行修改/改变以便特定地支持混合编码系统的需要。One benefit of an embodiment is the ability to transport hybrid-encoded audio bitstreams over a wide range of content distribution systems, each of which traditionally supports only data encoded according to a first encoding protocol. This eliminates the need to modify/change any system and/or transport-level protocols specifically to support a hybrid encoding system.
音频编码系统典型地利用标准化的比特流元素以便使得能够在 比特流本身内传输额外的(任意的)数据。这个额外的(任意的)数 据在包括在比特流内的编码的音频的解码期间典型地被跳过(即,忽 略),但是可以被用于除解码以外的目的。不同的音频编码标准通过 使用唯一的命名法(nomenclature)表示这些额外的数据字段。这个 一般类型的比特流元素可以包括但不限于,辅助数据、跳越字段、数 据流元素、填充元素、补助的数据、以及子流(substream)元素。 除非另有说明,否则这个文档中的表述“辅助数据”的使用并不暗示 特定类型或格式的额外数据,而是应该被解释为包含与本发明关联的 任何或所有示例的通用表述。Audio coding systems typically utilize standardized bitstream elements to enable the transmission of additional (arbitrary) data within the bitstream itself. This additional (arbitrary) data is typically skipped (i.e., ignored) during the decoding of the coded audio included in the bitstream, but can be used for purposes other than decoding. Different audio coding standards represent these additional data fields by using unique nomenclature. This general type of bitstream element can include, but is not limited to, auxiliary data, skip fields, data stream elements, filler elements, supplementary data, and substream elements. Unless otherwise noted, the use of the expression "auxiliary data" in this document does not imply additional data of a specific type or format, but should be interpreted as a general expression comprising any or all examples associated with the present invention.
经由结合的混合编码系统比特流内的第一编码协议的“辅助的” 比特流元素启用的数据通道可以携带一个或更多个二次(独立的或依 赖的)音频比特流(根据一个或更多个二次编码协议被编码)。一个 或更多个二次音频比特流可以被分割成N样本块并且多路复用到第 一比特流的“辅助数据”字段中。第一比特流可由合适的(互补)解 码器解码。另外,第一比特流的辅助数据可以被提取,被再结合到一 个或更多个二次音频比特流中,由支持二次比特流中的一个或更多个 的语法的处理器解码,并且随后被结合并且一起或独立地呈现。此外, 还可以将第一和第二比特流的作用颠倒,使得第一比特流的数据的块 被多路复用到第二比特流的辅助数据中。The data channel enabled via the "auxiliary" bitstream element of the first coding protocol within the combined hybrid coding system bitstream can carry one or more secondary (independent or dependent) audio bitstreams (encoded according to one or more secondary coding protocols). One or more secondary audio bitstreams can be divided into N sample blocks and multiplexed into the "auxiliary data" field of the first bitstream. The first bitstream can be decoded by a suitable (complementary) decoder. In addition, the auxiliary data of the first bitstream can be extracted, re-integrated into one or more secondary audio bitstreams, decoded by a processor that supports the syntax of one or more of the secondary bitstreams, and then combined and presented together or independently. In addition, the roles of the first and second bitstreams can be reversed so that blocks of data of the first bitstream are multiplexed into the auxiliary data of the second bitstream.
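下面的Python示意演示将二次比特流分割成固定大小的块、多路复用到第一比特流各帧的“辅助数据”字段中并在之后重组的过程;帧与字段的表示方式是为举例而假设的。The following Python sketch illustrates splitting a secondary bitstream into fixed-size blocks, multiplexing them into the "auxiliary data" field of successive primary-bitstream frames, and later recombining them; the frame and field representations are assumptions for the example.

```python
def mux_into_aux(primary_frames, secondary_stream, block_size):
    """Split the secondary bitstream into fixed-size blocks and carry one block
    per primary frame in its (assumed) 'auxiliary data' field."""
    blocks = [secondary_stream[i:i + block_size]
              for i in range(0, len(secondary_stream), block_size)]
    return [{"audio": frame, "aux": (blocks[i] if i < len(blocks) else b"")}
            for i, frame in enumerate(primary_frames)]

def demux_from_aux(muxed_frames):
    """Extract and reassemble the secondary bitstream from the aux fields."""
    return b"".join(f["aux"] for f in muxed_frames)

frames = ["frame0", "frame1", "frame2"]
secondary = b"object-coded-data"
muxed = mux_into_aux(frames, secondary, block_size=8)
recovered = demux_from_aux(muxed)
```

一个支持第一协议的解码器只读取"audio"字段;支持二次协议的处理器则可提取并重组"aux"字段。A first-protocol decoder reads only the "audio" field; a processor supporting the secondary protocol can extract and recombine the "aux" fields.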
与二次编码协议关联的比特流元素也携带和传送下层 (underlying)音频的信息(元数据)特性,其可以包括但不限于, 期望的声源位置、速度和尺寸。这个元数据在解码和呈现处理期间被 利用以便重新创建对于可应用的比特流内携带的关联音频对象的正 确的(即,初始的)位置。还可以在与第一编码协议关联的比特流元 素内携带上述的元数据,其可应用到包含在混合流中存在的一个或更 多个二次比特流中的音频对象。The bitstream elements associated with the secondary encoding protocol also carry and convey information (metadata) about the underlying audio characteristics, which may include, but are not limited to, expected sound source location, velocity, and size. This metadata is utilized during the decoding and rendering process to recreate the correct (i.e., original) positions for the associated audio objects carried within the applicable bitstream. The aforementioned metadata may also be carried within the bitstream elements associated with the primary encoding protocol and may apply to audio objects contained in one or more secondary bitstreams present in the hybrid stream.
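作为示意,下面的Python代码给出一种对象元数据(位置、速度、尺寸)的打包与解包方式;字段名称与JSON序列化均为举例假设,并非实际的比特流语法。As a sketch, the following Python code packs and unpacks per-object metadata (position, velocity, size); the field names and the JSON serialization are assumptions for illustration, not the actual bitstream syntax.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ObjectMetadata:
    # Field names are illustrative, not the actual bitstream element syntax.
    position: tuple   # (x, y, z): desired sound source position
    velocity: tuple   # (vx, vy, vz)
    size: float       # apparent source size

def pack(meta):
    """Serialize the metadata so it can ride in a bitstream element."""
    return json.dumps(asdict(meta)).encode()

def unpack(raw):
    """Recover the metadata so the renderer can restore the object's position."""
    d = json.loads(raw.decode())
    return ObjectMetadata(tuple(d["position"]), tuple(d["velocity"]), d["size"])

meta = ObjectMetadata(position=(0.1, 0.5, 0.0), velocity=(0.0, 0.2, 0.0), size=0.3)
restored = unpack(pack(meta))
```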
与混合编码系统的第一和第二编码协议中的一个或两者关联的 比特流元素携带/传送语境元数据,其识别空间参数(即,信号特性本 身的本体)和描述具有在混合编码的音频比特流内携带的特定音频种 类形式的下层音频本体类型的另外信息。这种元数据可以指示例如存 在口头对话、音乐、在音乐之上的对话、掌声、歌声等,并且可以被 用来自适应修改混合编码系统的上游或下游的互连的预处理或后处 理模块的性质。Bitstream elements associated with one or both of the first and second coding protocols of the hybrid coding system carry/transmit contextual metadata that identifies spatial parameters (i.e., the essence of the signal characteristics themselves) and additional information describing the type of underlying audio essence in the form of specific audio types carried within the hybrid-coded audio bitstream. This metadata can indicate, for example, the presence of spoken dialogue, music, dialogue over music, applause, singing, etc., and can be used to adaptively modify the properties of interconnected pre-processing or post-processing modules upstream or downstream of the hybrid coding system.
在一个实施例中,编解码器108被配置为利用共享的或公共的比 特池(pool)来操作,在比特池中对于编码可用的比特在支持一个或 更多个协议的编码子系统的部分或全部之间被“共享”。这种编解码 器可以在编码子系统之间分发可用的比特(来自公共的“共享的”比 特池)以便优化统一的比特流的整体音频质量。例如,在第一时间间 隔期间,编解码器可以分配更多的可用比特给第一编码子系统,并且 分配更少的可用比特给剩余子系统,而在第二时间间隔期间,编解码 器可以分配更少的可用比特给第一编码子系统,并且分配更多的可用 比特给剩余子系统。如何在编码子系统之间分配比特的决定可以依赖 于例如共享的比特池的统计分析的结果和/或由每个子系统编码的音 频内容的分析。编解码器可以以使得通过多路复用编码子系统的输出 构造的统一的比特流在特定的时间间隔内维持恒定的帧长度/比特率 的方式来分配来自共享的池的比特。在一些情况下还可以在特定的时 间间隔内改变统一的比特流的帧长度/比特率。In one embodiment, codec 108 is configured to operate using a shared or public bit pool, in which bits available for encoding are "shared" between some or all of the coding subsystems supporting one or more protocols. This codec can distribute available bits (from a public "shared" bit pool) between the coding subsystems to optimize the overall audio quality of a unified bitstream. For example, during a first time interval, the codec can allocate more available bits to the first coding subsystem and allocate fewer available bits to the remaining subsystems, while during a second time interval, the codec can allocate fewer available bits to the first coding subsystem and allocate more available bits to the remaining subsystems. The decision on how to allocate bits between the coding subsystems can depend on, for example, the results of a statistical analysis of the shared bit pool and/or an analysis of the audio content encoded by each subsystem. The codec can allocate bits from the shared pool in such a way that the unified bitstream constructed by the output of the multiplexed coding subsystems maintains a constant frame length/bit rate within a specific time interval. In some cases it is also possible to change the frame length/bit rate of a unified bitstream within a specific time interval.
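下面的Python示意按各编码子系统内容的(假设的)复杂度度量在共享比特池中按比例分配比特,并保持每帧总比特数恒定;具体的分配策略仅为举例。A Python sketch that divides a shared bit pool among coding subsystems in proportion to a (hypothetical) complexity measure while keeping the per-frame total exactly constant; the particular allocation policy is only an example.

```python
def allocate_bits(pool_bits, complexities):
    """Split a shared bit pool across subsystems in proportion to an analysed
    complexity measure, keeping the per-frame total exactly constant."""
    total = sum(complexities)
    alloc = [pool_bits * c // total for c in complexities]
    alloc[0] += pool_bits - sum(alloc)  # hand any rounding remainder to subsystem 0
    return alloc

# Interval 1: the channel bed is busy; interval 2: the objects dominate.
interval1 = allocate_bits(4096, [3, 1])
interval2 = allocate_bits(4096, [1, 3])
```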
在可替代的实施例中,编解码器108产生统一的比特流,其包括 根据配置和发送作为编码后的数据流(支持第一编码协议的解码器将 对其解码)的独立子流的第一编码协议编码的数据、以及根据发送作 为编码后的数据流(支持第一协议的解码器将忽略其)的独立的或依 赖的子流的第二协议编码的数据。更一般地说,在一类实施例中,编 解码器产生统一的比特流,其包括两个或更多个独立的或依赖的子流 (其中每个子流包括根据不同的或相同的编码协议编码的数据)。In an alternative embodiment, codec 108 generates a unified bitstream that includes data encoded according to a first encoding protocol that is configured and sent as an independent substream of the encoded data stream (which a decoder supporting the first encoding protocol will decode), and data encoded according to a second protocol that is sent as an independent or dependent substream of the encoded data stream (which a decoder supporting the first protocol will ignore). More generally, in one class of embodiments, the codec generates a unified bitstream that includes two or more independent or dependent substreams (where each substream includes data encoded according to a different or the same encoding protocol).
在又一个可替代的实施例中,编解码器108产生统一的比特流, 其包括根据利用唯一的比特流标识符配置和发送的第一编码协议编 码的数据(支持与唯一的比特流标识符关联的第一编码协议的解码器 将对其解码)、以及根据利用唯一的比特流标识符配置和发送的第二 协议编码的数据(支持第一协议的解码器将忽略其)。更一般地说, 在一类实施例中,编解码器产生统一的比特流,其包括两个或更多个 子流(其中每个子流包括根据不同的或相同的编码协议编码的数据并 且其中每个携带唯一的比特流标识符)。用于创建上述的统一的比特 流的方法和系统提供清楚地(给解码器)发信号通知哪个交错(interleaving)和/或协议已经在混合比特流内被利用的能力(例如, 发信号通知是否利用描述的AUX数据、SKIP、DSE或子流方法)。In yet another alternative embodiment, the codec 108 generates a unified bitstream that includes data encoded according to a first encoding protocol configured and transmitted using a unique bitstream identifier (which a decoder supporting the first encoding protocol associated with the unique bitstream identifier will decode), and data encoded according to a second protocol configured and transmitted using the unique bitstream identifier (which a decoder supporting the first protocol will ignore). More generally, in one class of embodiments, the codec generates a unified bitstream that includes two or more substreams (where each substream includes data encoded according to a different or the same encoding protocol and where each carries a unique bitstream identifier). The methods and systems for creating the above-described unified bitstream provide the ability to clearly signal (to a decoder) which interleaving and/or protocol has been utilized within the mixed bitstream (e.g., to signal whether the described AUX data, SKIP, DSE, or substream methods are utilized).
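下面的Python示意说明解码器如何按唯一的比特流标识符保留其支持的子流并忽略其余子流;标识符"proto1"/"proto2"为举例虚构。The Python sketch below shows a decoder keeping substreams whose unique bitstream identifier it supports and ignoring the rest; the identifiers "proto1"/"proto2" are invented for the example.

```python
def route_substreams(substreams, known_ids):
    """Keep substreams whose unique identifier this decoder supports; report the
    identifiers it must ignore."""
    decoded = [s for s in substreams if s["id"] in known_ids]
    ignored = sorted(s["id"] for s in substreams if s["id"] not in known_ids)
    return decoded, ignored

unified = [
    {"id": "proto1", "data": b"channel-coded"},   # first (channel-based) protocol
    {"id": "proto2", "data": b"object-coded"},    # secondary (object-based) protocol
]
legacy_decoded, legacy_ignored = route_substreams(unified, {"proto1"})
```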
混合编码系统被配置为支持在整个媒体递送系统期间发现的任 何处理点处对支持一个或更多个二次协议的比特流的解交错/解多路 复用和重新交错/重新多路复用到第一比特流(支持第一协议)中。混 合编解码器还被配置为能够将具有不同采样率的音频输入流编码到 一个比特流中。这提供用于有效地编码和分发包含具有固有地不同的 带宽的信号的音频源的手段。例如,与音乐和效果轨道相比,对话轨 道典型地具有固有地更低的带宽。The hybrid coding system is configured to support deinterleaving/demultiplexing and reinterleaving/remultiplexing of bitstreams supporting one or more secondary protocols into a primary bitstream (supporting the primary protocol) at any processing point found throughout the media delivery system. The hybrid codec is also configured to be able to encode audio input streams with different sampling rates into a single bitstream. This provides a means for efficiently encoding and distributing audio sources containing signals with inherently different bandwidths. For example, dialogue tracks typically have inherently lower bandwidth than music and effects tracks.
呈现Presentation
在实施例之下,自适应音频系统允许多个(例如,高达128个)轨道被封装,通常作为基础和对象的结合。对于自适应音频系统的音频数据的基本格式包括许多独立的单声道音频流。每个流具有与它关联的元数据,其指定流是基于声道的流还是基于对象的流。基于声道的流具有利用声道名字或标记编码的呈现信息;并且基于对象的流具有通过在另外关联的元数据中编码的数学表达式编码的位置信息。原始的独立的音频流然后被封装作为以有序的方式包含所有音频数据的单个串行的比特流。这个自适应数据配置允许根据非自我中心的参考系呈现声音,在其中声音的最终呈现位置基于回放环境以对应于混合者的意图。因此,声音可以被指定为来源于回放房间的参考系(例如,左壁的中间),而不是特定的标记的扬声器或扬声器组(例如,左环绕)。对象位置元数据包含为在房间中使用可用扬声器位置正确地播放声音所需的适当的非自我中心的参考系信息,该房间被设立来播放自适应音频内容。Under an embodiment, the adaptive audio system allows multiple (e.g., up to 128) tracks to be packaged, usually as a combination of beds and objects. The basic format of the audio data for the adaptive audio system comprises a number of independent monophonic audio streams. Each stream has associated with it metadata that specifies whether the stream is a channel-based stream or an object-based stream. Channel-based streams have rendering information encoded by means of channel name or label; object-based streams have location information encoded through mathematical expressions encoded in further associated metadata. The original independent audio streams are then packaged as a single serial bitstream that contains all of the audio data in an ordered fashion. This adaptive data configuration allows the sound to be rendered according to an allocentric frame of reference, in which the final rendering location of a sound is based on the playback environment so as to correspond to the mixer's intent. Thus, a sound can be specified to originate from a frame of reference of the playback room (e.g., the middle of the left wall), rather than a specific labeled speaker or speaker group (e.g., left surround). The object position metadata contains the appropriate allocentric frame-of-reference information required to play the sound correctly using the available speaker positions in a room that is set up to play the adaptive audio content.
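为说明非自我中心(allocentric)定位,下面的Python示意把以房间为参考、按各轴0..1归一化的位置映射到具体回放房间的绝对坐标;坐标约定为举例假设。To illustrate allocentric positioning, the Python sketch below maps a position normalized to the room (0..1 per axis) onto absolute coordinates of a concrete playback room; the coordinate convention is an assumption for the example.

```python
def allocentric_to_room(norm_pos, room_dims):
    """Map a normalized allocentric position (each axis 0..1, relative to the
    playback room) to absolute coordinates for that room."""
    return tuple(p * d for p, d in zip(norm_pos, room_dims))

# "Middle of the left wall": x=0 (left wall), y=0.5 (halfway front-to-back),
# z=0.5 (halfway up) -- the same specification works for any room size.
mid_left_wall = (0.0, 0.5, 0.5)
small_room = allocentric_to_room(mid_left_wall, (4.0, 6.0, 3.0))
large_room = allocentric_to_room(mid_left_wall, (20.0, 30.0, 12.0))
```

同一份对象位置元数据因此可在不同尺寸的房间中正确回放。The same object position metadata therefore plays back correctly in rooms of different sizes.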
呈现器采取对音频轨道编码的比特流,并且根据信号类型处理内容。基础被供给阵列,其将可能要求与单独的对象不同的延迟和均衡化处理。处理支持将这些基础和对象呈现给多个(高达64个)扬声器输出。图4是按照一个实施例的自适应音频系统的呈现阶段的框图。如图4的系统400所示,许多输入信号(诸如高达128个音频轨道,其包括自适应音频信号402)被系统300的创建、创作和封装阶段的特定组件(诸如RMU 306和处理器312)提供。这些信号包括被呈现器404利用的基于声道的基础和对象。基于声道的音频(基础)和对象被输入到水平管理器(level manager)406,其提供对不同的音频成分的振幅或输出水平的控制。特定音频成分可以由阵列校正组件408处理。自适应音频信号然后经过B链处理组件410,其产生多个(例如,高达64个)扬声器供给输出信号。通常,B链供给指的是由功率放大器、杂交(crossovers)和扬声器处理的信号,与构成电影胶片上的音轨的A链内容相反。The renderer takes the bitstream encoding the audio tracks and processes the content according to signal type. Beds are fed to arrays, which will potentially require different delay and equalization processing than individual objects. The process supports rendering of these beds and objects to multiple (up to 64) speaker outputs. FIG. 4 is a block diagram of a rendering stage of an adaptive audio system, under an embodiment. As shown in system 400 of FIG. 4, a number of input signals (such as up to 128 audio tracks, comprising the adaptive audio signals 402) are provided by certain components of the creation, authoring, and packaging stages of system 300 (such as RMU 306 and processor 312). These signals comprise the channel-based beds and objects that are utilized by the renderer 404. The channel-based audio (beds) and objects are input to a level manager 406, which provides control over the amplitude or output level of the different audio components. Certain audio components may be processed by an array correction component 408. The adaptive audio signals are then passed through a B-chain processing component 410, which generates a number (e.g., up to 64) of speaker feed output signals. In general, the B-chain feeds refer to the signals processed by power amplifiers, crossovers, and speakers, as opposed to the A-chain content that constitutes the soundtrack on the film stock.
在一个实施例中,呈现器404运行呈现算法,其智能地尽全力使用剧场中的环绕扬声器。通过改善环绕扬声器的功率处理和频率响应,并且对于剧场中的每个输出声道或扬声器保持相同的监视参考水平,在屏幕和环绕扬声器之间摇移的对象可以维持他们的声压水平并且在重要地没有增大剧场中的整体声压水平的情况下具有更接近的音色匹配。适当地指定的环绕扬声器的阵列将典型地具有足够净空(headroom)以便再现在环绕7.1或5.1音轨内可用的最大动态范围(即在参考水平之上20dB),然而不太可能单个环绕扬声器将具有大的多路的屏幕扬声器的相同的净空。结果,将很可能存在位于环绕场中的对象将要求大于使用单个环绕扬声器可得到的声压的声压的情况。在这些情况下,呈现器将展开声音横过合适数量的扬声器以便实现要求的声压水平。自适应音频系统改善环绕扬声器的质量和功率处理以便提供呈现的真实性方面的改善。它通过使用允许每个环绕扬声器实现改善的功率处理的可选的后部亚低音扬声器并且同时可能地利用更小的扬声器箱(cabinets),来提供对于环绕扬声器的低音管理的支持。它还允许增加比现行实践更接近于屏幕的侧面环绕扬声器以便确保对象可以平滑地从屏幕转变到环绕。In one embodiment, the renderer 404 runs a rendering algorithm that intelligently makes the best possible use of the surround speakers in the theater. By improving the power handling and frequency response of the surround speakers, and maintaining the same monitoring reference level for each output channel or speaker in the theater, objects panning between the screen and the surround speakers can maintain their sound pressure levels and have a closer timbre match without significantly increasing the overall sound pressure level in the theater. An array of properly specified surround speakers will typically have enough headroom to reproduce the maximum dynamic range available in a surround 7.1 or 5.1 soundtrack (i.e., 20 dB above the reference level); however, it is unlikely that a single surround speaker will have the same headroom as a large, multi-way screen speaker. As a result, there will likely be situations where an object located in the surround field will require a sound pressure greater than that achievable using a single surround speaker. In these cases, the renderer will spread the sound across the appropriate number of speakers to achieve the required sound pressure level. The adaptive audio system improves the quality and power handling of the surround speakers to provide improvements in the realism of the rendering. It provides support for bass management of the surround speakers by using an optional rear subwoofer that allows each surround speaker to achieve improved power handling while potentially utilizing smaller cabinets. It also allows the addition of side surround speakers closer to the screen than is currently practiced to ensure that objects can transition smoothly from screen to surround.
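作为粗略估算,下面的Python示意计算为达到目标声压级需要把对象展开到多少个等驱动的环绕扬声器上;其中"每加倍扬声器数约增加3dB(非相干功率叠加)"是举例采用的简化假设。As a rough back-of-the-envelope, the Python sketch below computes how many equally driven surround speakers an object must be spread across to reach a target sound pressure level; the "~3 dB per doubling of speakers (incoherent power summation)" rule is a simplifying assumption adopted for the example.

```python
import math

def speakers_needed(target_spl_db, single_speaker_max_db):
    """How many equally driven speakers to spread an object across, assuming each
    doubling of speakers adds ~3 dB of acoustic power (incoherent summation)."""
    deficit = target_spl_db - single_speaker_max_db
    if deficit <= 0:
        return 1          # a single speaker already reaches the target
    return math.ceil(2 ** (deficit / 3.0))

one = speakers_needed(100, 105)   # single speaker suffices
four = speakers_needed(105, 99)   # 6 dB short -> spread across 4 speakers
```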
通过与特定呈现处理一起使用指定音频对象的位置信息的元数据,系统400为内容创建者提供综合的、灵活的方法以用于移动超出现有的系统的约束。如先前所述,当前的系统创建并且分发音频,其利用对音频本体(回放的音频的部分)中传送的内容类型的有限认识被固定到特别的扬声器位置。自适应音频系统100提供新的混合方法,其包括对于扬声器位置特定的音频(左声道、右声道等)和面向对象的音频元素两者的选项,面向对象的音频元素携带概括的空间信息,其可以包括但不限于位置、尺寸和速度。这个混合方法提供对于呈现中的保真度(通过固定的扬声器位置提供)和灵活性(概括的音频对象)的平衡的办法。系统还通过内容创建者在内容创建时提供与音频本体配套的关于音频内容的额外的有用信息。这个信息提供可在呈现期间以非常有力的方式使用的关于音频的属性的有力的详细信息。这种属性可以包括但不限于,内容类型(对话、音乐、效果、福雷录音、背景/环境等)、空间属性(3D位置、3D尺寸、速度)、以及呈现信息(快移到扬声器位置、声道权重、增益、低音管理信息等)。By using metadata that specifies the location of audio objects together with a particular rendering process, system 400 provides content creators with a comprehensive, flexible method for moving beyond the constraints of existing systems. As stated previously, current systems create and distribute audio that is fixed to particular speaker locations with limited knowledge of the type of content conveyed in the audio essence (the part of the audio that is played back). Adaptive audio system 100 provides a new hybrid approach that includes options for both speaker-location-specific audio (left channel, right channel, etc.) and object-oriented audio elements that carry generalized spatial information, which may include, but is not limited to, position, size, and velocity. This hybrid approach balances fidelity (provided by fixed speaker locations) against flexibility in rendering (generalized audio objects). The system also provides additional useful information about the audio content, supplied by the content creator at content creation time and paired with the audio essence. This information provides powerful, detailed attributes of the audio that can be used in very powerful ways during rendering. Such attributes may include, but are not limited to, content type (dialogue, music, effects, Foley recording, background/ambience, etc.), spatial attributes (3D position, 3D size, velocity), and rendering information (snap to speaker position, channel weights, gain, bass management information, etc.).
在本申请中描述的自适应音频系统提供可以被广泛变化的数量的端点用于呈现的有力的信息。在很多情况下应用的最佳的呈现技术在很大程度上取决于端点装置。例如,家庭影院系统和声吧可以具有2、3、5、7或甚至9个分离的扬声器。许多其它类型的系统(诸如电视机、计算机和音乐坞)仅仅具有两个扬声器,并且几乎所有的通常使用的装置具有双耳的头戴耳机输出(PC、膝上型计算机、平板、蜂窝电话、音乐播放器等)。然而,对于当今分发的传统的音频(单声道、立体声、5.1、7.1声道),端点装置经常需要作出简单化的决定并且折衷以便呈现和再现现在以声道/扬声器特定的形式分发的音频。另外,有一点或没有传送的关于正在分发的实际内容的信息(对话、音乐、环境等),并且有一点或没有关于内容创建者的对于音频再现的意图的信息。然而,自适应音频系统100提供这个信息并且可能地访问音频对象,其可以被用来创建强制性的下一代用户体验。The adaptive audio system described in this application provides powerful information that can be used for rendering by a widely varying number of endpoints. In many cases, the optimal rendering technique to apply depends greatly on the endpoint device. For example, home theater systems and soundbars may have 2, 3, 5, 7, or even 9 separate speakers. Many other types of systems (such as televisions, computers, and music docks) have only two speakers, and nearly all commonly used devices have a binaural headphone output (PCs, laptops, tablets, cell phones, music players, etc.). However, for the traditional audio distributed today (mono, stereo, 5.1, or 7.1 channels), endpoint devices often must make simplistic decisions and compromises in order to render and reproduce audio that is now distributed in a channel/speaker-specific form. In addition, little or no information is conveyed regarding the actual content being distributed (dialogue, music, ambience, etc.), and little or no information about the content creator's intent for the audio reproduction. However, adaptive audio system 100 provides this information and, potentially, access to the audio objects, which can be used to create a compelling next-generation user experience.
系统100允许内容创建者使用元数据(诸如位置、尺寸、速度等等)通过唯一的并且强大的元数据和自适应音频传输格式在比特流内嵌入混合的空间意图。这允许在音频的空间再现方面有大量灵活性。从空间呈现观点看,自适应音频使得能够使混合适应于特别的房间中的扬声器的精确位置以免当回放系统的几何形状与创作系统不相同时出现的空间失真。在其中仅仅发送对于扬声器声道的音频的当前音频再现系统中,内容创建者的意图是未知的。系统100使用在整个创建和分发流水线期间传送的元数据。意识到自适应音频的再现系统可以使用这个元数据信息来以匹配内容创建者的初始意图的方式再现内容。同样地,混合可以适应于再现系统的精确的硬件配置。目前,在呈现设备(诸如电视机、家庭影院、声吧(soundbars)、便携式音乐播放器坞(docks)等)中存在许多不同的可能的扬声器配置和类型。当这些系统被发送有现今的声道特定的音频信息(即左和右声道音频或多声道的音频)时,系统必须处理音频来适当地匹配呈现设备的能力。一个示例是标准的立体声音频被发送给具有多于两个扬声器的声吧。在其中仅仅发送对于扬声器声道的音频的当前音频再现中,内容创建者的意图是未知的。通过使用在整个创建和分发流水线期间传送的元数据,意识到自适应音频的再现系统可以使用这个信息来以匹配内容创建者的初始意图的方式再现内容。例如,某些声吧具有侧面激发(firing)扬声器来创建包围的感觉。利用自适应音频,空间信息和内容类型(诸如环境效果)可以由声吧使用来只发送合适的音频到这些侧面激发扬声器。System 100 allows content creators to embed the spatial intent of a mix within the bitstream using metadata (such as position, size, speed, etc.) through a unique and powerful metadata and adaptive audio transmission format. This allows for a great deal of flexibility in the spatial reproduction of audio. From a spatial rendering perspective, adaptive audio enables the mix to be adapted to the precise location of the speakers in a particular room to avoid spatial distortions that occur when the geometry of the playback system is different from the authoring system. In current audio reproduction systems, where only the audio for the speaker channels is transmitted, the content creator's intent is unknown. System 100 uses metadata that is transmitted throughout the creation and distribution pipeline. Reproduction systems aware of adaptive audio can use this metadata information to reproduce the content in a manner that matches the content creator's original intent. Similarly, the mix can be adapted to the precise hardware configuration of the reproduction system. Currently, there are many different possible speaker configurations and types in rendering devices such as televisions, home theaters, soundbars, portable music player docks, etc. When these systems are sent with today's channel-specific audio information (i.e., left and right channel audio or multi-channel audio), the systems must process the audio to appropriately match the capabilities of the rendering device. An example is standard stereo audio being sent to a sound bar with more than two speakers. In current audio reproduction, where only the audio for the speaker channels is sent, the intent of the content creator is unknown. By using metadata transmitted throughout the creation and distribution pipeline, an adaptive-audio-aware reproduction system can use this information to reproduce the content in a manner that matches the original intent of the content creator. For example, some sound bars have side-firing speakers to create a sense of envelopment. With adaptive audio, spatial information and content type (such as ambient effects) can be used by the sound bar to send only the appropriate audio to these side-firing speakers.
自适应音频系统允许在系统中在前/后、左/右、上/下、近/远的 全部尺度上无限内插扬声器。在当前的音频再现系统中,不存在关于 如何处理其中可以期望定位音频使得它被收听者感知为在两个扬声 器之间的音频的信息。目前,在仅仅分配给特定的扬声器的音频的情 况下,空间量子化因素被引入。利用自适应音频,音频的空间定位可 以被准确地知道并且相应地在音频再现系统上再现。Adaptive audio systems allow for unlimited interpolation of speakers across all scales within the system: front/back, left/right, up/down, near/far. Current audio reproduction systems lack information on how to handle audio that is expected to be positioned so that the listener perceives it as being between two speakers. Currently, spatial quantization factors are introduced to assign audio only to specific speakers. With adaptive audio, the spatial positioning of audio can be accurately known and reproduced accordingly on the audio reproduction system.
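为说明在两个扬声器之间定位声像,下面的Python示意实现一种常见的等功率摇移律(正弦/余弦增益);这是教科书式的通用做法,并非本系统规定的具体呈现算法。To illustrate positioning a phantom image between two speakers, the Python sketch below implements a common constant-power panning law (sine/cosine gains); this is a textbook technique, not the specific rendering algorithm prescribed by this system.

```python
import math

def constant_power_pan(t):
    """Gains for a phantom image between two adjacent speakers: t=0 is fully at
    the first speaker, t=1 fully at the second; g1**2 + g2**2 == 1 at every t."""
    theta = t * math.pi / 2
    return math.cos(theta), math.sin(theta)

g_left, g_right = constant_power_pan(0.5)   # image exactly between the pair
```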
对于头戴耳机呈现,创建者的意图通过匹配头相关传递函数 (Head RelatedTransfer Functions,HRTF)到空间位置来被实现。 当在头戴耳机之上再现音频时,空间虚拟化可以通过应用处理音频的 头相关传递函数、添加创建在三维空间中而不在头戴耳机之上播放的 音频的感知的感知提示(cues)来实现。空间再现的精度取决于合适 的HRTF的选择,HRTF可以基于包括空间位置在内的若干因素而改 变。使用由自适应音频系统提供的空间信息可以使得选择一个或持续 改变数量的HRTF以便极大地改善再现体验。For headphone rendering, the creator's intent is realized by matching head-related transfer functions (HRTFs) to spatial locations. When the audio is reproduced over headphones, spatial virtualization can be achieved by applying HRTFs to the processed audio, adding perceptual cues that create the perception of the audio being in three-dimensional space without being played over headphones. The accuracy of the spatial reproduction depends on the selection of appropriate HRTFs, which can vary based on several factors, including spatial location. Using the spatial information provided by the adaptive audio system can enable the selection of one or a continuously changing number of HRTFs to greatly improve the reproduction experience.
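下面的Python示意按角距离从一个(假设的)离散测量网格中选取最接近声源方位角的HRTF;网格间隔与仅用方位角索引均为举例简化。The Python sketch below selects, by angular distance, the HRTF from a (hypothetical) discrete measurement grid closest to the source azimuth; the grid spacing and azimuth-only indexing are simplifications for the example.

```python
def nearest_hrtf_azimuth(source_az, measured_azimuths):
    """Pick the measured HRTF azimuth closest to the requested source position,
    handling wrap-around at 360 degrees."""
    def ang_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(measured_azimuths, key=lambda az: ang_dist(az, source_az))

grid = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330]
pick_a = nearest_hrtf_azimuth(100, grid)   # 90 is 10 degrees away, 120 is 20
pick_b = nearest_hrtf_azimuth(350, grid)   # wraps around: 0 is 10 degrees away
```

随着对象移动,可按同样方式持续地重新选择(或在相邻HRTF之间插值)。As the object moves, the HRTF can be continuously reselected (or interpolated between neighbors) in the same way.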
自适应音频系统传送的空间信息可以不仅由内容创建者使用来 创建强制性的娱乐体验(电影、电视、音乐等),而且空间信息也可 以指示收听者相对于物理对象(诸如建筑物或地理的感兴趣点)的位 置。这将允许用户和与真实世界有关的虚拟化的音频体验相互作用 即,增大真实性。The spatial information delivered by adaptive audio systems can not only be used by content creators to create compelling entertainment experiences (movies, TV, music, etc.), but the spatial information can also indicate the listener's position relative to physical objects (such as buildings or geographical points of interest). This allows users to interact with the virtualized audio experience in relation to the real world, i.e., increasing realism.
实施例还使得能够通过利用只有当对象音频数据不可用时才读取元数据来执行增强的上混来进行空间上混。知道所有对象的位置和他们的类型允许上混器更好地区别基于声道的轨道内的元素。现有的上混算法必须推断诸如音频内容类型(讲话、音乐、环境效果)之类的信息以及音频流内的不同元素的位置以便创建具有最小或没有可听到的伪迹的高质量上混。常常推断的信息可能是不正确的或不适当的。在自适应音频的情况下,可从与例如音频内容类型、空间位置、速度、音频对象尺寸等有关的元数据中获得的附加信息可以由上混算法使用来创建高质量再现结果。该系统还通过准确地定位屏幕的音频对象到视觉元素来空间地将音频匹配到视频。在该情况下,如果某些音频元素的再现的空间位置匹配屏幕上的图象元素,则强制性的音频/视频再现体验是可能的,特别地在更大屏幕尺寸的情况下。一个示例是在电影或电视节目中具有对话与正在屏幕上说话的人或角色在空间上一致。通常的基于扬声器声道的音频的情况下,不存在容易的方法来确定对话应该被空间地定位在哪里以便匹配屏幕上的角色或人的位置。利用自适应音频可用的音频信息,这种音频/视觉对准可以被实现。视觉位置和音频空间对准也可以被用于非角色/对话对象(诸如汽车、卡车、动画、等等)。Embodiments also enable spatial upmixing, by reading the metadata to perform enhanced upmixing only when object audio data is not available. Knowing the positions of all objects and their types allows the upmixer to better distinguish elements within channel-based tracks. Existing upmixing algorithms must infer information such as the audio content type (speech, music, ambient effects) and the positions of different elements within the audio stream in order to create a high-quality upmix with minimal or no audible artifacts. Often, the inferred information may be incorrect or inappropriate. With adaptive audio, the additional information available from metadata related to, for example, audio content type, spatial position, velocity, and audio object size can be used by the upmixing algorithm to create a high-quality reproduction result. The system also spatially matches audio to video by accurately positioning on-screen audio objects to visual elements. In this case, a compelling audio/video reproduction experience is possible if the reproduced spatial positions of certain audio elements match the image elements on the screen, particularly with larger screen sizes. An example is having the dialogue in a movie or television program spatially coincide with the person or character speaking on screen. With conventional speaker-channel-based audio, there is no easy way to determine where the dialogue should be spatially positioned to match the position of the character or person on screen. Leveraging the audio information available with adaptive audio, this audio/visual alignment can be achieved. Visual position and audio spatial alignment can also be used for non-character/dialogue objects (such as cars, trucks, animations, etc.).
空间掩蔽处理被系统100促进,因为通过自适应音频元数据对混合的空间意图的认识意味着混合可以适应于任何扬声器配置。然而,由于回放系统限制,在相同的或几乎相同的位置中下混对象存在风险。例如,如果环绕声道不存在,打算在左后部中摇移的对象可能被下混到左前方,但是如果同时在左前方中出现更大声的元素,则下混的对象将被掩蔽并且从混合中消失。使用自适应音频元数据,空间掩蔽可以由呈现器预期,并且每个对象的空间和/或响度下混参数可以被调节使得混合的全部音频元素保持正如原始的混合中可感知的一样。由于呈现器明白混合和回放系统之间的空间关系,因此它具有“快移”对象到最接近扬声器的能力而不是在两个或更多个扬声器之间创建幻像(phantom image)。虽然这可能使混合的空间表示稍微失真,但是它也允许呈现器避免非故意的幻像。例如,如果混合阶段的左扬声器的角位置不对应于回放系统的左扬声器的角位置,则使用快移到最接近扬声器的功能可以避免回放系统再现混合阶段的左声道的恒定幻像。Spatial masking is facilitated by system 100 because the knowledge of the spatial intent of the mix through adaptive audio metadata means that the mix can be adapted to any speaker configuration. However, there is a risk of downmixing objects in the same or nearly the same position due to playback system limitations. For example, if surround channels are not present, an object intended to pan in the left rear might be downmixed to the left front, but if a louder element simultaneously appears in the left front, the downmixed object will be masked and disappear from the mix. Using adaptive audio metadata, spatial masking can be anticipated by the renderer, and the spatial and/or loudness downmix parameters of each object can be adjusted so that all audio elements of the mix remain as perceivable as in the original mix. Because the renderer understands the spatial relationship between the mix and the playback system, it has the ability to "snap" objects to the nearest speaker rather than creating phantom images between two or more speakers. While this may slightly distort the spatial representation of the mix, it also allows the renderer to avoid unintentional phantoms. For example, if the angular position of the left speaker of the mixing stage does not correspond to the angular position of the left speaker of the playback system, using the snap-to-nearest-speaker function can avoid the playback system reproducing a constant phantom image of the mixing stage's left channel.
For content processing, the adaptive audio system 100 allows a content creator to create individual audio objects and add information about the content that can be conveyed to the reproduction system. This allows a great deal of flexibility in the audio processing prior to reproduction. From a content-processing and rendering standpoint, the adaptive audio system enables processing to be adapted to the object type. For example, dialogue enhancement can be applied to dialogue objects only. Dialogue enhancement refers to a method of processing audio that contains dialogue so that the audibility and/or intelligibility of the dialogue is increased and/or improved. In many cases the audio processing applied to dialogue is inappropriate for non-dialogue audio content (i.e., music, ambient effects, etc.) and can result in objectionable audible artifacts. With adaptive audio, an audio object can contain only the dialogue in a piece of content, and it can be labeled accordingly so that a rendering solution can selectively apply dialogue enhancement to the dialogue content only. In addition, if the audio object is dialogue only (and not, as is frequently the case, a mixture of dialogue and other content), the dialogue enhancement processing can process the dialogue exclusively (thereby limiting any processing being performed on any other content).
Similarly, bass management (filtering, attenuation, gain) can be targeted to specific objects based on their type. Bass management refers to selectively isolating and processing only the bass (or lower) frequencies in a particular piece of content. With current audio systems and delivery mechanisms, this is a "blind" process applied to all audio. With adaptive audio, specific audio objects suitable for bass management can be identified through metadata, and rendering processing can be applied appropriately.
The adaptive audio system 100 also provides object-based dynamic range compression and selective upmixing. Conventional audio tracks have the same duration as the content itself, but an audio object might occur for only a limited amount of time in the content. The metadata associated with an object can contain information about its average and peak signal amplitudes, as well as its onset or attack time (particularly for transient material). This information allows a compressor to better adapt its compression and time constants (attack, release, etc.) to suit the content. For selective upmixing, the content creator may choose to indicate in the adaptive audio bitstream whether an object should be upmixed or not. This information allows the adaptive audio renderer and upmixer to distinguish, while respecting the creator's intent, which audio elements can be safely upmixed.
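How a compressor might adapt its time constants from per-object amplitude and onset metadata can be sketched as follows. The thresholds, field names, and returned values are invented for illustration and are not values from the format.

```python
def compressor_settings(avg_db, peak_db, onset_ms):
    """Choose compressor time constants from per-object metadata
    (illustrative thresholds only)."""
    crest_db = peak_db - avg_db  # peak-to-average ratio of the object
    if onset_ms < 10.0 or crest_db > 12.0:
        # transient material: fast attack to catch the onset
        return {"attack_ms": 1.0, "release_ms": 50.0}
    # sustained material: slower constants to avoid pumping
    return {"attack_ms": 20.0, "release_ms": 250.0}

print(compressor_settings(avg_db=-24.0, peak_db=-6.0, onset_ms=5.0))
print(compressor_settings(avg_db=-24.0, peak_db=-18.0, onset_ms=120.0))
```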
Embodiments also allow the adaptive audio system to select a preferred rendering algorithm from a number of available rendering algorithms and/or surround sound formats. Examples of available rendering algorithms include: binaural, stereo dipole, Ambisonics, Wave Field Synthesis (WFS), multichannel panning, and raw stems with position metadata. Others include dual balance and vector-based amplitude panning.
The binaural distribution format uses a two-channel representation of the sound field in terms of the signals present at the left and right ears. Binaural information can be created via in-ear recording or synthesized using HRTF models. Playback of a binaural representation is typically done over headphones, or by employing crosstalk cancellation. Playback over an arbitrary speaker setup would require signal analysis to determine the associated sound field and/or the signal source(s).
The stereo dipole rendering method is a transaural crosstalk cancellation process to make binaural signals playable over stereo speakers (e.g., at + and -10 degrees off center).
Ambisonics is both a distribution format and a rendering method, encoded in a four-channel form called B-format. The first channel, W, is a non-directional pressure signal; the second channel, X, is a directional pressure gradient containing front and back information; the third channel, Y, contains left and right, and Z contains up and down. These channels define a first-order sample of the complete sound field at a point. Ambisonics uses all available speakers to re-create the sampled (or synthesized) sound field within the speaker array, such that while some speakers are pushing, others are pulling.
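The B-format channels described above follow the standard first-order encoding equations for a source at a given azimuth and elevation. This sketch shows generic Ambisonics math, not an implementation taken from this system.

```python
import math

def encode_b_format(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into first-order B-format (W, X, Y, Z)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)               # omnidirectional pressure
    x = sample * math.cos(az) * math.cos(el)  # front/back gradient
    y = sample * math.sin(az) * math.cos(el)  # left/right gradient
    z = sample * math.sin(el)                 # up/down gradient
    return w, x, y, z

# A unit sample straight ahead puts all energy in W and X, none in Y or Z:
print(encode_b_format(1.0, azimuth_deg=0.0, elevation_deg=0.0))
```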
Wave Field Synthesis is a rendering method of sound reproduction based on the precise construction of the desired wave field by secondary sources. WFS is based on the Huygens principle and is implemented as speaker arrays (tens or hundreds) that ring the listening space and operate in a coordinated, phased fashion to re-create each individual sound wave.
Multichannel panning is a distribution format and/or a rendering method, and may be referred to as channel-based audio. In this case, sound is represented as a number of discrete sources to be played back through an equal number of speakers at defined angles from the listener. The content creator/mixer can create virtual images by panning signals between adjacent channels to provide direction cues; early reflections, reverberation, and the like can be mixed into many channels to provide direction and environment cues.
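Panning a signal between two adjacent channels, as described above, is commonly done with a constant-power law so that perceived loudness stays steady as the image moves. The sketch below shows the idea under that assumption; it is not this system's actual panner.

```python
import math

def constant_power_pan(position):
    """Constant-power gains for a pair of adjacent channels.
    position = 0.0 is fully in the first channel, 1.0 fully in
    the second; total power gl**2 + gr**2 is always 1."""
    angle = position * math.pi / 2.0
    return math.cos(angle), math.sin(angle)

# A phantom image midway between the pair gets equal gains of ~0.7071:
gl, gr = constant_power_pan(0.5)
print(round(gl, 4), round(gr, 4), round(gl ** 2 + gr ** 2, 6))
```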
Raw stems with position metadata is a distribution format, and may also be referred to as object-based audio. In this format, distinct "close mic'ed" sound sources are represented along with position and environment metadata. Virtual sources are rendered based on the metadata and the playback equipment and listening environment.
The adaptive audio format is a hybrid of the multichannel panning format and the raw stems format. The rendering method in the present embodiment is multichannel panning. For the audio channels, rendering (panning) happens at authoring time, while for objects, rendering (panning) happens at playback.
Metadata and Adaptive Audio Transport Format
As described above, metadata is generated during the creation stage to encode certain positional information for the audio objects and to accompany the audio program to aid in rendering it, and in particular to describe the audio program in a way that enables rendering it on a wide variety of playback equipment and playback environments. The metadata is generated for a given program by the editors and mixers who create, collect, edit, and manipulate the audio during post-production. An important feature of the adaptive audio format is the ability to control how the audio will translate to playback systems and environments that differ from the mix environment. In particular, a given cinema may have lesser capabilities than the mix environment.
The adaptive audio renderer is designed to make the best use of the equipment available to re-create the mixer's intent. Moreover, the adaptive audio authoring tools allow the mixer to preview and adjust how the mix will be rendered on a variety of playback configurations. All of the metadata values can be conditioned on the playback environment and speaker configuration. For example, a different mix level can be specified for a given audio element based on the playback configuration or mode. In one embodiment, the list of conditioned playback modes is extensible and includes the following: (1) channel-based-only playback: 5.1, 7.1, 7.1 (height), 9.1; and (2) discrete-speaker playback: 3D, 2D (no height).
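Conditioning a metadata value on the playback mode can be pictured as a per-mode lookup with a default, for instance a mix level that differs between 5.1 and 9.1 playback. The field names and gain values below are invented for illustration; the format's actual conditional-metadata encoding is not shown here.

```python
# Hypothetical conditional metadata for one audio element:
element_metadata = {
    "default_gain_db": 0.0,
    "gain_db_by_mode": {"5.1": -3.0, "7.1": -1.5, "9.1": 0.0},
}

def gain_for_mode(metadata, playback_mode):
    """Return the mix level conditioned on the playback configuration,
    falling back to the default when no override exists."""
    return metadata["gain_db_by_mode"].get(playback_mode,
                                           metadata["default_gain_db"])

print(gain_for_mode(element_metadata, "5.1"))     # -3.0
print(gain_for_mode(element_metadata, "stereo"))  # 0.0 (default)
```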
In one embodiment, the metadata controls or dictates different aspects of the adaptive audio content and is organized based on different types including: program metadata, audio metadata, and rendering metadata (for channels and objects). Each type of metadata includes one or more metadata items that provide values for characteristics referenced by an identifier (ID). Figure 5 is a table that lists the metadata types and associated metadata elements for the adaptive audio system, under an embodiment.
As shown in table 500 of Figure 5, the first type of metadata is program metadata, which includes metadata elements that specify the frame rate, track count, extensible channel description, and mix stage description. The frame rate metadata element specifies the rate of the audio content frames in frames per second (fps). The raw audio format need not include framing of the audio or the metadata, since the audio is provided as full tracks (the duration of a reel or an entire feature) rather than as audio segments (the duration of an object). The raw format does need to carry all of the information required to enable the adaptive audio encoder to frame the audio and metadata, including the actual frame rate. Table 1 shows the ID, example values, and description of the frame rate metadata element.
Table 1
The track count metadata element indicates the number of audio tracks in a frame. An example adaptive audio decoder/processor can support up to 128 simultaneous audio tracks, while the adaptive audio format will support any number of audio tracks. Table 2 shows the ID, example values, and description of the track count metadata element.
Table 2
Channel-based audio can be assigned to non-standard channels, and the extensible channel description metadata element enables mixes to use new channel positions. For each extension channel, the following metadata should be provided, as shown in Table 3:
Table 3
The mix stage description metadata element specifies the frequency at which a particular speaker produces half the power of the passband. Table 4 shows the ID, example values, and description of the mix stage description metadata element, where LF = low frequency; HF = high frequency; 3 dB point = edge of the speaker passband.
Table 4
As shown in Figure 5, the second type of metadata is audio metadata. Each channel-based or object-based audio element consists of an audio essence and metadata. The audio essence is a monophonic audio stream carried on one of many audio tracks. The associated metadata describes how the audio essence is stored (audio metadata, e.g., sample rate) or how it should be rendered (rendering metadata, e.g., the desired audio source position). In general, audio tracks are continuous through the duration of the audio program. The program editor or mixer is responsible for assigning audio elements to tracks. The track usage is expected to be sparse, i.e., the median simultaneous track usage may be only 16 to 32. In a typical implementation, the audio will be efficiently transmitted using a lossless encoder. However, alternative implementations are possible, for example transmitting uncoded audio data or lossily coded audio data. In a typical implementation, the format consists of up to 128 audio tracks, where each track has a single sample rate and a single coding system. Each track lasts the duration of the feature (there is no explicit reel support). The mapping of objects to tracks (time multiplexing) is the responsibility of the content creator (mixer).
As shown in Figure 5, the audio metadata includes elements for sample rate, bit depth, and coding system. Table 5 shows the ID, example values, and description of the sample rate metadata element.
Table 5
Table 6 shows the ID, example values, and description of the bit depth metadata element (for PCM and lossless compression).
Table 6
Table 7 shows the ID, example values, and description of the coding system metadata element.
Table 7
As shown in Figure 5, the third type of metadata is rendering metadata. The rendering metadata specifies values that help the renderer match the original mixer's intent as closely as possible, regardless of the playback environment. The set of metadata elements differs for channel-based audio and object-based audio. A first rendering metadata field selects between the two types of audio (channel-based or object-based), as shown in Table 8.
Table 8
The rendering metadata for channel-based audio contains a position metadata element that specifies the audio source position as one or more speaker positions. Table 9 shows the ID and values for the position metadata element for the channel-based case.
Table 9
The rendering metadata for channel-based audio also contains a rendering control element that specifies certain characteristics regarding the playback of the channel-based audio, as shown in Table 10.
Table 10
For object-based audio, the metadata includes elements analogous to those for channel-based audio. Table 11 provides the ID and values for the object position metadata element. The object position is described in one of three ways: three-dimensional coordinates; a plane and two-dimensional coordinates; or a line and one-dimensional coordinates. The rendering method can be modified based on the position information type.
Table 11
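The three position forms listed above (3-D coordinates, a plane plus 2-D coordinates, a line plus a 1-D coordinate) can be modeled as a small tagged union. The class and field names below are assumptions for illustration, not identifiers from Table 11.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Position3D:          # full three-dimensional coordinates
    x: float
    y: float
    z: float

@dataclass
class PlanePosition:       # a named plane plus 2-D coordinates on it
    plane: str             # e.g. "screen"
    u: float
    v: float

@dataclass
class LinePosition:        # a named line plus a 1-D coordinate along it
    line: str              # e.g. "left_wall"
    t: float

ObjectPosition = Union[Position3D, PlanePosition, LinePosition]

# A renderer can branch on which form it receives:
pos: ObjectPosition = PlanePosition(plane="screen", u=0.25, v=0.8)
print(type(pos).__name__, pos.u, pos.v)
```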
The ID and values for the object rendering control metadata elements are shown in Table 12. These values provide additional means to control or optimize the rendering of object-based audio.
Table 12
In one embodiment, the metadata described above and shown in Figure 5 is generated and stored as one or more files that are associated or indexed with the corresponding audio content, so that the audio streams are processed by the adaptive audio system interpreting the metadata generated by the mixer. It should be noted that the metadata described above is an exemplary set of IDs, values, and definitions, and that other or additional metadata elements may be included for use in the adaptive audio system.
In one embodiment, two (or more) sets of metadata elements are associated with each of the object-based audio streams and channels. A first set of metadata is applied to the plurality of audio streams for a first condition of the playback environment, and a second set of metadata is applied to the plurality of audio streams for a second condition of the playback environment. For a given audio stream, the second or subsequent set of metadata elements replaces the first set of metadata elements based on the condition of the playback environment. The condition may include factors such as room size, shape, composition of the materials within the room, the density of people in the room and the current occupancy, ambient noise characteristics, ambient light characteristics, and any other factor that might affect the sound or even the mood of the playback environment.
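Swapping between metadata sets based on playback-environment conditions can be sketched as a first-match lookup over condition dictionaries. The condition keys (e.g. room_size) and the gain values are hypothetical; the actual condition encoding is not specified here.

```python
def select_metadata_set(metadata_sets, environment):
    """Return the first metadata set whose condition matches the
    playback environment; an empty condition matches anything."""
    for candidate in metadata_sets:
        condition = candidate.get("condition", {})
        if all(environment.get(k) == v for k, v in condition.items()):
            return candidate
    return metadata_sets[-1]

sets = [
    {"condition": {"room_size": "small"}, "gain_db": -4.0},
    {"condition": {}, "gain_db": 0.0},  # default set, matches anything
]
print(select_metadata_set(sets, {"room_size": "small"})["gain_db"])  # -4.0
print(select_metadata_set(sets, {"room_size": "large"})["gain_db"])  # 0.0
```

Listing the more specific conditions before the default makes the first match the most specific one.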
Post-production and mastering
The rendering stage 110 of the adaptive audio processing system 100 may include audio post-production steps that lead to the creation of a final mix. In a cinema application, the three main categories of sound used in a movie mix are dialogue, music, and effects. Effects consist of sounds that are not dialogue or music (e.g., ambient noise, background/scene noise). Sound effects can be recorded or synthesized by the sound designer, or they can be sourced from effects libraries. A sub-group of effects that involve specific noise sources (e.g., footsteps, doors, etc.) is known as Foley and is performed by Foley artists. The different types of sound are labeled and panned accordingly by the recording engineers.
Figure 6 illustrates an example workflow for the post-production process in an adaptive audio system, under an embodiment. As shown in diagram 600, the individual sound components of music, dialogue, Foley, and effects are all brought together in the dubbing theatre during the final mix 606, and the re-recording mixer(s) 604 use the premixes (also known as a 'mix minus') along with the individual sound objects and positional data to create stems as a way of grouping, for example, dialogue, music, effects, Foley, and background sounds. In addition to forming the final mix 606, the music and all-effects stems can be used as a basis for creating dubbed-language versions of the movie. Each stem consists of a channel-based bed and several audio objects with metadata. The stems combine to form the final mix. Using object panning information from both the audio workstation and the mixing console, the rendering and mastering unit 608 renders the audio to the speaker locations in the dubbing theatre. This rendering lets the mixers hear how the channel-based beds and audio objects combine, and also provides the ability to render to different configurations. The mixer can use conditional metadata, which defaults to the relevant profiles, to control how the content is rendered to the surround channels. In this way, the mixers retain complete control of how the movie plays back in all of the scalable environments.
A monitoring step can be included after one or both of the re-recording step 604 and the final mixing step 606 to allow the mixer to hear and evaluate the intermediate content produced during each of these stages.
During the mastering session, the stems, objects, and metadata are brought together in an adaptive audio package 614, which is produced by the print master 610. This package also contains the backward-compatible (legacy 5.1 or 7.1) surround sound theatrical mix 612. The rendering/mastering unit (RMU) 608 can render this output if desired, thereby eliminating the need for any additional workflow steps in generating the existing channel-based deliverables. In one embodiment, the audio files are packaged using standard Material Exchange Format (MXF) wrapping. The adaptive audio mix master file can also be used to generate other deliverables, such as consumer multichannel or stereo mixes. The intelligent profiles and conditional metadata allow controlled renderings that can significantly reduce the time required to create such mixes.
In one embodiment, a packaging system can be used to create a digital cinema package for the deliverables that include an adaptive audio mix. The audio track files may be locked together to help prevent synchronization errors with the adaptive audio track files. Certain territories require the addition of track files during the packaging phase, for example, the addition of hearing-impaired (HI) or visually-impaired narration (VI-N) tracks to the main audio track file.
In one embodiment, the speaker array in the playback environment may comprise any number of surround sound speakers placed and designated in accordance with established surround sound standards. Any number of additional speakers for accurate rendering of the object-based audio content may also be placed based on the conditions of the playback environment. These additional speakers may be set up by a sound engineer, and this setup is provided to the system in the form of a setup file that is used by the system to render the object-based components of the adaptive audio to a specific speaker or speakers within the overall speaker array. The setup file includes at least a list of speaker designations, a mapping of channels to individual speakers, information regarding the grouping of speakers, and a run-time mapping based on the relative positions of the speakers to the playback environment. The run-time mapping is utilized by the snap-to feature of the system, which renders point-source object-based audio content to the specific speaker that is closest to the perceived location of the sound as intended by the sound engineer.
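A setup file of the kind described above might look like the following JSON, holding speaker designations, the channel mapping, groupings, and positions for the run-time mapping. The schema and field names are invented for illustration; the actual setup-file format is not specified here.

```python
import json

setup_json = """
{
  "speakers": [
    {"designation": "L",  "channel": 1, "group": "screen",   "azimuth_deg": -30},
    {"designation": "C",  "channel": 2, "group": "screen",   "azimuth_deg": 0},
    {"designation": "R",  "channel": 3, "group": "screen",   "azimuth_deg": 30},
    {"designation": "Ls", "channel": 4, "group": "surround", "azimuth_deg": -110},
    {"designation": "Rs", "channel": 5, "group": "surround", "azimuth_deg": 110}
  ]
}
"""

setup = json.loads(setup_json)

# Build the channel-to-speaker mapping and the speaker groupings:
channel_map = {s["channel"]: s["designation"] for s in setup["speakers"]}
groups = {}
for s in setup["speakers"]:
    groups.setdefault(s["group"], []).append(s["designation"])

print(channel_map[4])       # Ls
print(groups["surround"])   # ['Ls', 'Rs']
```

The per-speaker azimuths are what a run-time mapping would consult when snapping a point-source object to the closest speaker.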
Figure 7 is a diagram of an example workflow for a digital cinema packaging process using adaptive audio files, under an embodiment. As shown in diagram 700, the audio files comprising both the adaptive audio files and the 5.1 or 7.1 surround sound audio files are input to a wrapping/encryption block 704. In one embodiment, upon creation of the digital cinema package in block 706, the PCM MXF file (with the appropriate additional tracks appended) is encrypted using SMPTE specifications in accordance with existing practice. The adaptive audio MXF is packaged as an auxiliary track file and is optionally encrypted using a symmetric content key per the SMPTE specification. This single DCP 708 can then be delivered to any Digital Cinema Initiatives (DCI)-compliant server. In general, any installation that is not suitably equipped will simply ignore the additional track file containing the adaptive audio soundtrack, and will use the existing main audio track file for standard playback. Installations equipped with appropriate adaptive audio processors will be able to ingest and play back the adaptive audio soundtrack where applicable, reverting to the standard audio track as needed. The wrapping/encryption component 704 may also provide input directly to the distribution KDM block 710 for generating an appropriate security key for use by the digital cinema server.
Other movie elements or files, such as subtitles 714 and images 716, may be packaged and encrypted along with the audio file 702. In this case, specific processing steps may be included, such as compression 712 in the case of the image file 716.
For content management, the adaptive audio system 100 allows a content creator to create individual audio objects and add information about the content that can be conveyed to the reproduction system. This allows a great deal of flexibility in the content management of the audio. From a content management standpoint, the adaptive audio methods enable several different features. These include changing the language of the content by only replacing the dialogue object, for space saving, download efficiency, geographical playback adaptation, and so on. Film, television, and other entertainment programs are typically distributed internationally. This often requires that the language in the piece of content be changed depending on where it will be reproduced (French for films being shown in France, German for TV programs being shown in Germany, and so on). Today this often requires a completely independent audio soundtrack to be created, packaged, and distributed. With adaptive audio and its inherent concept of audio objects, the dialogue for a piece of content can be an independent audio object. This allows the language of the content to be easily changed without updating or altering other elements of the audio soundtrack, such as the music, effects, and so on. This applies not only to foreign languages but also to language that is inappropriate for certain audiences (e.g., children's television shows, airline movies, etc.), targeted advertising, and more.
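Replacing only the dialogue object while leaving music and effects untouched, as described above, can be sketched with a trivial object model. The object dictionaries and names below are hypothetical.

```python
def localize_mix(audio_objects, language, dialogue_by_language):
    """Swap only the dialogue object for the requested language,
    leaving all other objects untouched (illustrative object model)."""
    localized = []
    for obj in audio_objects:
        if obj["type"] == "dialogue":
            localized.append(dialogue_by_language[language])
        else:
            localized.append(obj)
    return localized

mix = [{"type": "music",    "name": "score"},
       {"type": "dialogue", "name": "dialogue_en"},
       {"type": "effects",  "name": "fx"}]
dialogue = {"en": {"type": "dialogue", "name": "dialogue_en"},
            "fr": {"type": "dialogue", "name": "dialogue_fr"}}

print([o["name"] for o in localize_mix(mix, "fr", dialogue)])
# ['score', 'dialogue_fr', 'fx']
```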
Facility and equipment considerations
The adaptive audio file format and the associated processors allow for changes in how theatre equipment is installed, calibrated, and maintained. With the introduction of many more potential speaker outputs, each independently equalized and balanced, there is a need for intelligent and time-efficient automatic room equalization, which may be performed with the ability to manually adjust any automated room equalization. In one embodiment, the adaptive audio system uses an optimized 1/12th-octave band equalization engine. Up to 64 outputs can be processed to more accurately balance the sound in the theatre. The system also allows scheduled monitoring of the individual speaker outputs, from the cinema processor output all the way to the sound reproduced in the auditorium. Local or network alerts can be created to ensure that appropriate action is taken. The flexible rendering system can automatically remove a damaged speaker or amplifier from the replay chain and render around it, so allowing the show to go on.
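A 1/12th-octave equalization engine works on band center frequencies spaced twelve per octave. The helper below generates such centers across the audible range, anchored at 1 kHz; the anchoring convention and range limits are assumptions for illustration.

```python
def twelfth_octave_centers(f_min=20.0, f_max=20000.0, reference_hz=1000.0):
    """Center frequencies of 1/12th-octave bands between f_min and
    f_max, anchored at the reference frequency."""
    centers = []
    n = 0
    # Step down from the reference until just below f_min...
    while reference_hz * 2.0 ** (n / 12.0) >= f_min:
        n -= 1
    # ...then step back up, collecting centers up to f_max.
    n += 1
    f = reference_hz * 2.0 ** (n / 12.0)
    while f <= f_max:
        centers.append(f)
        n += 1
        f = reference_hz * 2.0 ** (n / 12.0)
    return centers

bands = twelfth_octave_centers()
print(len(bands))          # 119 bands across 20 Hz to 20 kHz
print(round(bands[0], 2))  # 20.86
```

Each band's equalization gain would then be applied per output, up to the 64 outputs mentioned above.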
The cinema processor can be connected to the digital cinema server with the existing 8xAES main audio connections and an Ethernet connection for streaming the adaptive audio data. Playback of surround 7.1 or 5.1 content uses the existing PCM connections. The adaptive audio data is streamed over Ethernet to the cinema processor for decoding and rendering, and communication between the server and the cinema processor allows the audio to be identified and synchronized. In the event of any problem with the adaptive audio track playback, the sound is reverted to the Dolby Surround 7.1 or 5.1 PCM audio.
Although embodiments have been described with respect to 5.1 and 7.1 surround sound systems, it should be noted that many other present and future surround configurations may also be used in conjunction with the embodiments, including 9.1, 11.1, and 13.1, and beyond.
The adaptive audio system is designed to allow both the content creator and the exhibitor to decide how the sound content is to be rendered in different playback speaker configurations. The ideal number of speaker output channels used will vary according to the room size. The recommended speaker placement therefore depends on many factors, such as size, composition, seating configuration, environment, average audience size, and so on. Example or representative speaker configurations and layouts are provided in the present application for purposes of illustration only, and are not intended to limit the scope of any claimed embodiments.
It is crucial that the recommended speaker layout for the adaptive audio system remain compatible with existing cinema systems, so as not to compromise playback of existing 5.1 and 7.1 channel-based formats. To preserve the intent of the adaptive audio sound engineer, and the intent of mixers of 7.1 and 5.1 content, the positions of the existing screen channels should not be altered too radically in an effort to heighten or accentuate the introduction of new speaker positions. In contrast to using all 64 available output channels, the adaptive audio format can be accurately rendered in a cinema to a speaker configuration such as 7.1, thus allowing the format (and its associated benefits) to be used even in existing theaters with no change to amplifiers or speakers.
Different speaker locations can have different effectiveness depending on the theater design, so there is at present no industry-specified ideal number or placement of channels. Adaptive audio is intended to be truly adaptable and capable of accurate playback in a variety of auditoriums, whether they have a limited number of playback channels or many channels in highly flexible configurations.
Figure 8 is a top view 800 of an example layout of suggested speaker positions for use with an adaptive audio system in a typical auditorium, and Figure 9 is a front view 900 of an example layout of suggested speaker positions at the screen of the auditorium. The reference position referred to hereinafter corresponds to a position on the centerline of the screen, two-thirds of the distance back from the screen to the rear wall. Standard screen speakers 801 are shown in their usual positions relative to the screen. Studies of the perception of elevation in the screen plane have shown that additional speakers 804 behind the screen, such as Left Center (Lc) and Right Center (Rc) screen speakers (in the positions of the Left Extra and Right Extra channels of 70 mm film formats), can be beneficial in creating smoother pans across the screen. Such optional speakers are therefore recommended, particularly in auditoriums with screens wider than 12 m (40 ft). All screen speakers should be angled so that they are aimed at the reference position. The recommended placement of the subwoofer 810 behind the screen should remain unchanged, including maintaining an asymmetric cabinet placement relative to the center of the room, to prevent the excitation of standing waves. Additional subwoofers 816 may be placed at the rear of the theater.
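The reference position and the screen-speaker aiming rule above lend themselves to a small geometric sketch. The 2-D room coordinates, room dimensions, and speaker position below are hypothetical, introduced only to illustrate the two-thirds rule and a toe-in angle computation:

```python
import math

def reference_position(room_depth, room_width):
    """Reference position: on the screen centerline (x = width/2),
    two-thirds of the distance from the screen (y = 0) to the rear wall."""
    return (room_width / 2.0, room_depth * 2.0 / 3.0)

def aim_angle_deg(speaker_xy, target_xy):
    """Horizontal angle (degrees) off straight-ahead (+y) at which a
    speaker on the screen wall must be toed to point at the target."""
    dx = target_xy[0] - speaker_xy[0]
    dy = target_xy[1] - speaker_xy[1]
    return math.degrees(math.atan2(dx, dy))

# Hypothetical 30 m deep, 20 m wide auditorium; a screen speaker 4 m
# from the left wall, mounted on the screen wall.
ref = reference_position(room_depth=30.0, room_width=20.0)
print(ref)  # (10.0, 20.0)
print(round(aim_angle_deg((4.0, 0.0), ref), 1))  # 16.7 degrees of toe-in
```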
The surround speakers 802 should be individually wired back to the amplifier rack, and individually amplified where possible with a dedicated channel of power amplification matched to the speaker's power handling in accordance with the manufacturer's specifications. Ideally, surround speakers should be specified to handle increased SPL for each individual speaker, and with wider frequency response where possible. As a rule of thumb for an average-sized theater, the spacing of surround speakers should be between 2 and 3 m (6 ft 6 in and 9 ft 9 in), with left and right surround speakers placed symmetrically. However, the spacing of surround speakers is most effectively considered as the angle subtended between adjacent speakers from a given listener, rather than as the absolute distance between the speakers. For optimal playback throughout the auditorium, the angular distance between adjacent speakers should be 30 degrees or less, referenced from each of the four corners of the primary listening area. Good results can be achieved with spacing up to 50 degrees. For each surround zone, the speakers should maintain equal linear spacing adjacent to the seating area where possible. Linear spacing beyond the listening area, e.g., between the front row and the screen, can be slightly larger. Figure 11 is an example of the placement of top surround speakers 808 and side surround speakers 806 relative to the reference position, according to one embodiment.
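The subtended-angle criterion above is straightforward to check numerically. The sketch below uses a hypothetical listener corner and two side-wall speaker positions (none of these coordinates come from the embodiment) to test the 30-degree guideline:

```python
import math

def subtended_angle_deg(listener, spk_a, spk_b):
    """Angle (degrees) subtended at the listener by two adjacent speakers."""
    ax, ay = spk_a[0] - listener[0], spk_a[1] - listener[1]
    bx, by = spk_b[0] - listener[0], spk_b[1] - listener[1]
    dot = ax * bx + ay * by
    return math.degrees(math.acos(dot / (math.hypot(ax, ay) * math.hypot(bx, by))))

# Listener at a corner of the primary listening area; two adjacent side
# surround speakers 2.5 m apart on the side wall (hypothetical layout).
listener = (5.0, 10.0)
angle = subtended_angle_deg(listener, (0.0, 9.0), (0.0, 11.5))
print(round(angle, 1), "degrees")          # about 28.0
print("within guideline:", angle <= 30.0)  # True
```

Because the angle shrinks as the listener moves away from the wall, checking it from each of the four corners of the listening area, as the text recommends, bounds the worst case.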
The additional side surround speakers 806 should be mounted closer to the screen than the current recommended practice of starting at approximately one-third of the distance to the rear of the auditorium. These speakers are not used as side surrounds during playback of Dolby Surround 7.1 or 5.1 soundtracks, but enable smooth transitions and improved timbre matching when panning objects from the screen speakers to the surround zones. To maximize the spatial impression, the surround arrays should be placed as low as practical, subject to the following constraints: the vertical placement of the surround speakers at the front of the array should be reasonably close to the height of the screen speakers' acoustic center, and high enough to maintain good coverage across the seating area according to the directivity of the speakers. The vertical placement of the surround speakers should be such that they form a straight line from front to back, and are (typically) tilted upward so that the relative elevation of the surround speakers above the listeners is maintained as the seating elevation increases toward the rear of the cinema, as shown in Figure 10, which is a side view of an example layout of suggested speaker positions for use with an adaptive audio system in a typical auditorium. In practice, this can be achieved most simply by choosing the elevation for the front-most and rear-most side surround speakers and placing the remaining speakers on a line between these points.
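Placing the remaining speakers on a line between the chosen front-most and rear-most points is just linear interpolation. A minimal sketch, with hypothetical mounting heights (the 3.0 m and 4.2 m values are assumptions for illustration only):

```python
def interpolated_elevations(front_height, rear_height, n_speakers):
    """Mounting heights for a row of side surround speakers, linearly
    interpolated between the chosen front-most and rear-most heights,
    so the row forms a straight, upward-sloping line."""
    if n_speakers == 1:
        return [front_height]
    step = (rear_height - front_height) / (n_speakers - 1)
    return [front_height + i * step for i in range(n_speakers)]

# Hypothetical: 5 side surrounds rising from 3.0 m at the front of the
# array to 4.2 m at the rear of the auditorium.
heights = interpolated_elevations(3.0, 4.2, 5)
print([round(h, 2) for h in heights])  # [3.0, 3.3, 3.6, 3.9, 4.2]
```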
To provide optimal coverage of the seating area for each speaker, the side surrounds 806, rear speakers 816, and top surrounds 808 should be aimed at the reference position in the theater, under defined guidelines regarding spacing, position, angle, and so on.
Embodiments of the adaptive audio cinema system and format achieve improved levels of audience immersion and engagement over current systems by offering mixers powerful new creative tools, and a new cinema processor featuring a flexible rendering engine that optimizes the audio quality and surround effects of the soundtrack for each room's speaker layout and characteristics. In addition, the system maintains backward compatibility and minimizes the impact on current production and distribution workflows.
Although embodiments have been described with respect to examples and implementations in a cinema environment in which the adaptive audio content is associated with film content for use in digital cinema processing systems, it should be noted that embodiments may also be implemented in non-cinema environments. The adaptive audio content, comprising object-based audio and channel-based audio, may be used in conjunction with any related content (associated audio, video, graphics, etc.), or it may constitute standalone audio content. The playback environment may be any appropriate listening environment, from headphones or near-field monitors to small or large rooms, cars, open-air stages, concert halls, and so on.
Aspects of the system 100 may be implemented in an appropriate computer-based sound processing network environment for processing digital or digitized audio files. Portions of the adaptive audio system may include one or more networks comprising any desired number of individual machines, including one or more routers (not shown) that serve to buffer and route the data transmitted among the computers. Such a network may be built on various different network protocols, and may be the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof. In an embodiment in which the network comprises the Internet, one or more machines may be configured to access the Internet through web browser programs.
One or more of the components, modules, processes, or other functional components may be implemented through a computer program that controls execution of a processor-based computing device of the system. It should also be noted that the various functions disclosed in this application may be described using any number of combinations of hardware, firmware, and/or instructions and/or data embodied in various machine-readable or computer-readable media, in terms of their behavioral, register-transfer, logic-component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, physical (non-transitory), non-volatile storage media in various forms, such as optical, magnetic, or semiconductor storage media.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to". Words using the singular or plural number also include the plural or singular number, respectively. Additionally, the words "herein", "hereunder", "above", "below", and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word "or" is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
While one or more implementations have been described by way of example and in terms of specific embodiments, it is to be understood that the one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
| Publication Number | Publication Date |
|---|---|
| HK1226887A1 (en) | 2017-10-06 |
| HK1226887A (en) | 2017-10-06 |
| HK1226887B (en) | 2020-03-27 |