Personalization allows listeners to optimize the sound experience according to their needs. Improved intelligibility, preferred dialogue language selection, immersiveness, flexibility and compatibility with headphones and a multitude of speaker layouts are among the features that characterize a personalized sound experience.
The key to enabling these features is metadata, which is sent alongside the content to describe the intended audio experience as completely as possible, together with a new generation of codecs known as NGA (Next Generation Audio) codecs: MPEG-H 3D Audio, AC-4 and DTS-UHD.
The best way to learn about NGA is to experience it! Put on your headphones and watch the videos below for examples of the personalization and immersion that NGA enables.
(Video credit: Fraunhofer IIS)
(Video credit: Dolby)
The role of the Metadata
To ensure interoperability in workflows, it is important to specify a common standard for the metadata that is used to describe the audio. The Audio Definition Model (ADM) defined in ITU-R BS.2076-1 is such a standard and is based on previous EBU work (EBU Tech 3364). The EBU ADM guidelines aim to help professional users understand the ADM and include relevant examples. Live scenarios are addressed by the serial representation of the ADM described in ITU-R BS.2125-0. A standard enabling serial ADM with synchronized audio signals on the AES3 serial digital audio interface will be published by SMPTE (SMPTE ST 2116).
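To give a feel for what such metadata looks like, the sketch below assembles a heavily reduced, ADM-style XML fragment using only Python's standard library. The element names follow the BS.2076 vocabulary, but the IDs and values are illustrative assumptions, and a real ADM document carries further mandatory elements (audioProgramme, audioContent, audioPackFormat, track references and so on).

```python
# Minimal, illustrative sketch of ADM-style metadata (BS.2076 element names).
# IDs, values and the reduced element set are assumptions for illustration only.
import xml.etree.ElementTree as ET

fmt = ET.Element("audioFormatExtended")

# One object-type channel whose block format places the source in 3D space.
ET.SubElement(fmt, "audioObject",
              audioObjectID="AO_1001", audioObjectName="Commentary")
chan = ET.SubElement(fmt, "audioChannelFormat",
                     audioChannelFormatID="AC_00031001",
                     audioChannelFormatName="Commentary",
                     typeDefinition="Objects")
block = ET.SubElement(chan, "audioBlockFormat",
                      audioBlockFormatID="AB_00031001_00000001")
ET.SubElement(block, "position", coordinate="azimuth").text = "-30.0"
ET.SubElement(block, "position", coordinate="elevation").text = "0.0"
ET.SubElement(block, "position", coordinate="distance").text = "1.0"
ET.SubElement(block, "gain").text = "1.0"

print(ET.tostring(fmt, encoding="unicode"))
```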
The Audio Definition Model is intentionally kept generic in order to support a wide variety of application areas. Application-specific profiles (such as for production or distribution) are in the process of being defined and will constrain the ADM to simplify implementation and prevent interoperability problems in the production of broadcast programmes delivered using different NGA codecs.
The role of the Renderer
A renderer is like a combination of an upmixer and a panner that can attenuate or amplify individual sources and place them in three-dimensional space. Sources could be individual objects, complex beds, or scene-based audio.
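As an illustration of the panning part of that job (and not the actual ITU-R BS.2127 / EAR algorithm), the sketch below turns an object's azimuth into a pair of loudspeaker gains using a simple tangent-law stereo panner, assuming loudspeakers at +/-30 degrees and the ADM convention that positive azimuth points to the left.

```python
# Toy point-source panner: position metadata in, per-loudspeaker gains out.
# This is NOT the BS.2127 / EAR algorithm, just a stereo tangent-law sketch.
import math

def stereo_gains(azimuth_deg: float, spread_deg: float = 30.0) -> tuple[float, float]:
    """Return power-normalised (left, right) gains for sources within +/-30 degrees."""
    az = max(-spread_deg, min(spread_deg, azimuth_deg))
    r = math.tan(math.radians(az)) / math.tan(math.radians(spread_deg))
    left, right = (1.0 + r) / 2.0, (1.0 - r) / 2.0
    norm = math.hypot(left, right)
    return left / norm, right / norm

# An object panned 15 degrees to the left ends up louder in the left speaker.
print(stereo_gains(15.0))
```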
The renderer is built into end-user devices, and the sources are delivered to those devices as separate components. This is the change that makes it possible to adapt the experience to the listener's situation. The renderer is controlled by rendering metadata originating at the mixing desk, but it can interpret that metadata against device capabilities and user preferences, such as a preferred dialogue level.
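As a sketch of how this interpretation might work at the device, the snippet below applies a user's dialogue-level preference to objects tagged as dialogue, clamped to a producer-defined range. The field names (content_kind, min_gain_db, max_gain_db) are illustrative assumptions, not part of the ADM or of any codec API.

```python
# Sketch of combining rendering metadata with a user preference at the device.
# Field names are hypothetical; only the principle (producer-bounded
# personalization of dialogue level) is taken from the text above.
from dataclasses import dataclass

@dataclass
class ObjectMeta:
    name: str
    content_kind: str          # e.g. "dialogue", "music", "effects"
    min_gain_db: float = -6.0  # producer-defined personalization range
    max_gain_db: float = +9.0

def effective_gain_db(meta: ObjectMeta, user_dialogue_pref_db: float) -> float:
    """Gain the renderer applies to this object, given the user's preference."""
    if meta.content_kind != "dialogue":
        return 0.0
    return max(meta.min_gain_db, min(meta.max_gain_db, user_dialogue_pref_db))

commentary = ObjectMeta("Commentary", "dialogue")
print(effective_gain_db(commentary, user_dialogue_pref_db=12.0))  # clamped to +9 dB
```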
EBU ADM Renderer (EAR)
The EBU ADM Renderer was created to provide an open, widely accepted ADM renderer that anyone can use. It was developed by an alliance of R&D and broadcast organizations: the IRT, BBC, France Télévisions, b<>com and the EBU.
The EAR implementation is based on open, established standards and is designed as a reference platform for further development, allowing the broadcast community to explore and make use of the creative potential of the technology.
The EBU ADM Renderer was later submitted to the relevant ITU-R study group, where Dolby and Fraunhofer IIS agreed to work with the EAR team to create a wider common system. The original EAR was then extended with features from Dolby and Fraunhofer IIS, with all three elements now forming part of the ITU-R BS.2127 standard.
(Video credit: IRT)