News Stories

EBU publishes new MXF timecode recommendation

The EBU Strategic Programme on harmonisation and interoperability in file-based HDTV production (SP-HIPS) has updated the EBU Recommendation on how to use timecode in Material Exchange Format (MXF) files. It is the first public deliverable from the Group.

Most important

MXF is the most important standard for file exchange between professional media organisations. To improve the interoperability of MXF products, the EBU HIPS-MXF Group was set up at the end of 2009, led by Mr Christoph Nufer (IRT). Mr Nufer’s team started by updating the existing EBU R 122 “Material Exchange Format Timecode Implementation” recommendation, which was created in 2007. The new document includes, among other things, information on handling 50/60 Hz timecode and timecode with new HDTV essence types.

More to follow

According to EBU Programme Manager Dr Hans Hoffmann, the MXF Group’s work addresses a key aspect of file-based HDTV production: “This is one of the ‘lego blocks’ people need to get right, to be able to work with media files flawlessly. Other elements the EBU’s work is focussing on include acquisition [camera] metadata, new HDTV studio codecs, 1080p/50 and 3G SDI.”

The work of the HIPS-MXF Group now continues with specifying the recommended ways of carrying subtitling in MXF. The draft of this EBU Recommendation is largely finished and already available to participants in the MXF Group for review.

source: http://tech.ebu.ch/Jahia/site/tech/cache/offonce/news/ebu-publishes-new-mxf-timecode-recommend-18nov10

Video Compression Technology (overview/update article)

The MPEG-2 and MPEG-4 standards are now at a relatively mature stage. At the same time, new implementations of MPEG-4 are still on the rise, especially those using H.264/AVC. Both ATSC and DVB-T support this more efficient compression standard (in newer receiving devices, such as mobile displays), and additional codecs are emerging in a growing number of video applications.

While MPEG-2 and AVC are now ubiquitous in broadcast, cable and satellite distribution, other codecs have found an equally widespread home for the distribution of video over the Internet. Because we are seeing more applications that cross the various media, it is useful to understand the makeup of these various codecs.

Most Compression Systems Have Similarities

All compression systems function by removing redundancy from the coded information, and the highest amount of compression is almost always achieved by lossy coding: the decoded information, while a faithful rendition of the original, is not an identical set of data. Essentially, most video codecs today function by reducing the information content of video in three ways: spatially, temporally and logically.

Spatial video content (in the horizontal/vertical image dimensions) is compressed by means of mathematical transforms and quantization. The former remaps the video pixels into arrays that separate out detail information; the latter reduces the number of bits required for each transformed pixel.
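To make the transform-and-quantization step concrete, here is a minimal Python sketch; the 8x8 gradient block and the flat quantizer step of 16 are made-up illustrations, and real codecs use frequency-weighted quantization matrices rather than a single step:

    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis: rows are cosine patterns of rising frequency.
        j = np.arange(n)
        m = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
        m[0, :] /= np.sqrt(2)
        return m * np.sqrt(2.0 / n)

    # A smooth 8x8 luma block (hypothetical gradient; smooth areas dominate video).
    j = np.arange(8.0)
    block = 40.0 + 12.0 * j[None, :] + 6.0 * j[:, None]

    D = dct_matrix()
    coeffs = D @ block @ D.T              # transform: detail separated by frequency
    qstep = 16.0                          # illustrative flat quantizer step
    quantized = np.round(coeffs / qstep)  # quantization: fewer bits per coefficient
    # Nearly all high-frequency entries are now zero; entropy coding exploits this.
    print(int(np.count_nonzero(quantized)), "of 64 coefficients remain nonzero")

The point of the sketch: after the transform, a smooth block’s energy sits in a handful of low-frequency coefficients, and quantization zeroes out most of the rest.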

Temporal video content (in the time dimension) is compressed by means of residuals and motion estimation, and in some codecs, by quantization as well. Residuals reduce information by coding differences between frames of video, and motion estimation provides data reduction by accounting for the movement of pixel “blocks” (and groups of blocks, i.e., macroblocks) over time.
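The temporal tools can be sketched the same way. Below is a toy full-search block matcher in Python that picks the motion vector minimizing the sum of absolute differences (SAD) and then forms the residual; the frames and the ±4-pixel search window are hypothetical, and production encoders use far faster hierarchical searches with sub-pixel refinement:

    import numpy as np

    def best_motion_vector(ref, cur_block, top, left, search=4):
        # Full search: find the offset in `ref` whose block best predicts
        # `cur_block`, by minimum sum of absolute differences (SAD).
        n = cur_block.shape[0]
        best, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                    sad = np.abs(ref[y:y+n, x:x+n] - cur_block).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
        return best

    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (32, 32)).astype(float)   # reference frame
    cur = np.roll(ref, shift=(2, -1), axis=(0, 1))       # next frame: content shifted

    dy, dx = best_motion_vector(ref, cur[8:16, 8:16], top=8, left=8)
    residual = cur[8:16, 8:16] - ref[8+dy:16+dy, 8+dx:16+dx]
    # The coded output is a motion vector plus a (mostly zero) residual,
    # instead of the raw 8x8 block.
    print("motion vector:", (dy, dx), "residual energy:", float(np.abs(residual).sum()))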

Logical content (i.e., strings of codewords representing spatial and temporal content) is further compressed by using various forms of entropy coding and/or arithmetic coding, which remove information by efficiently coding the strings in terms of their statistical likelihood of occurrence.
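As a rough illustration of this stage, the Python sketch below derives Huffman code lengths for a stream of quantized coefficients; the frequent symbol (zero) earns the shortest codeword. Real codecs use predefined variable-length tables or context-adaptive arithmetic coding (CABAC in AVC), but the statistical principle is the same:

    import heapq
    from collections import Counter

    def huffman_code_lengths(symbols):
        # Build a Huffman tree bottom-up; each dict maps symbol -> code length.
        counts = Counter(symbols)
        heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(counts.items())]
        heapq.heapify(heap)
        tie = len(heap)  # tiebreaker so tuples never compare the dicts
        while len(heap) > 1:
            n1, _, a = heapq.heappop(heap)
            n2, _, b = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in {**a, **b}.items()}
            heapq.heappush(heap, (n1 + n2, tie, merged))
            tie += 1
        return heap[0][2]

    # Quantized coefficients are dominated by zeros, so 0 gets the shortest code.
    stream = [0] * 50 + [1] * 10 + [-1] * 8 + [5] * 2
    print(huffman_code_lengths(stream))   # e.g. {0: 1, 1: 2, -1: 3, 5: 3}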

Each MPEG standard is actually a collection of different tools and operating parameters, grouped into levels and profiles. The level typically defines the horsepower needed for decoding the bit stream, as defined in macroblocks per second (or per frame) and the overall video bit rate. Profiles are used to group the different tools used during encoding. For example, MPEG-2 Main Profile @ Main Level is sufficient to encode SD digital TV broadcasts, while MPEG-2 Main Profile @ High Level is needed to encode HD video.
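A back-of-the-envelope calculation shows why levels are stated this way; the sketch below simply counts macroblocks, as bookkeeping for illustration rather than a normative level check:

    def macroblocks_per_second(width, height, fps):
        # A macroblock covers 16 x 16 pixels; partial rows are padded out,
        # which is why 1080-line video is actually coded as 1088 lines.
        mbs_per_frame = -(-width // 16) * -(-height // 16)
        return mbs_per_frame * fps

    print(macroblocks_per_second(720, 576, 25))    # SD:  40,500 macroblocks/s
    print(macroblocks_per_second(1920, 1080, 25))  # HD: 204,000 macroblocks/s

The roughly five-fold jump in decoder workload from SD to HD is what pushes the stream from Main Level into High Level.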

A huge amount of content on the Internet, however, does not use MPEG-2 or AVC coding. YouTube, for instance, almost exclusively uses Flash for video compression. Flash does not use one unique codec, but rather defines a format for FLV files. These files, in turn, encapsulate content usually encoded with either the On2 VP6 or Sorenson Spark video compression algorithms.

VP6, now owned by Google (which also owns YouTube), uses several standard compression techniques: a DCT block transform for spatial redundancy, motion compensation, a loop filter and entropy coding. (The loop filter is used to lower the appearance of block-edge artifacts.) While all of these are present in AVC compression, the loop filtering used in VP6 operates in what can be called a “predictive” manner. Instead of filtering blocks over an entire reconstructed frame, the VP6 codec only filters the edges of blocks that have been constructed by means of motion vectors that cross a block boundary. VP6 also uses different types of reference frames, motion estimation and entropy coding, compared with MPEG.
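One plausible reading of that predictive rule (an illustration only, not the normative VP6 test) is a per-block decision like the following: filter an edge only when the block’s motion vector is not aligned to the block grid, i.e., when prediction pulled samples from more than one reference block.

    def needs_edge_filter(mv, block_size=8):
        # Hypothetical VP6-style test: skip the filter unless the motion
        # vector crosses a block boundary in the reference frame.
        dy, dx = mv
        return dy % block_size != 0 or dx % block_size != 0

    print(needs_edge_filter((0, 0)))   # False: zero motion, edges line up
    print(needs_edge_filter((8, -8)))  # False: block-aligned jump, no crossing
    print(needs_edge_filter((3, 5)))   # True: prediction straddled a boundary

Compared with a full-frame deblocking pass, this saves work on the many blocks whose predictions never straddled a boundary.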

According to various sources, Sorenson Spark is based on the earlier H.263 codec, while the related Sorenson Video 3 (SVQ3) codec appears to be a tweaked version of H.264/AVC. While VP6 and Spark are essentially incompatible with non-Flash decoders, the most recent releases of Flash Player do support H.264/AVC video and HE-AAC audio.

VP6 and Spark (as well as AVC) are covered by various patents, with differing licensing terms for encoding, distribution and decoding. HTML5 video is another approach defined for Internet use; HTML5 itself is a markup standard rather than a codec, and its video element leaves the choice of codec open, in an attempt to simplify (or remove) licensing fees. (HTML5 has recently drawn attention in the context of video players, with Apple’s announcement that its products would support it, and not Flash video.) Supporters of HTML5 want a codec that does not require per-unit or per-distributor licensing, that is compatible with the “open source” development model, that is of sufficient quality, and that does not present a patent risk for large companies.

Nonetheless, while HTML5 developers formerly recommended support for playback of video compressed in the Theora format, there is currently no specific video codec defined for it. In May, the WebM Project was launched to push for the use of VP8, a descendant of VP6, as the codec for HTML5. The project features contributions from more than 40 supporters, including Mozilla, Opera, Google, and various software and hardware vendors. Perhaps not coincidentally, in August, the licensor of H.264, MPEG LA, announced that it will not charge royalties for H.264-encoded Internet video that is free to viewers.

New Versions of Codecs

Current codecs are also being improved by means of new and emerging extensions, which have applications for storage and content management. A number of extensions to H.264/AVC support high-fidelity professional applications; scalability and multiview video have also been defined. MPEG collectively refers to the “High” profiles as the “fidelity range extensions” (FRExt), which include the High 10 profile (10 bits per sample) and the High 4:2:2 and High 4:4:4 profiles.

AVC has generally been viewed as providing a doubling of coding efficiency over MPEG-2, but the quest for more efficiency goes on. The ISO/IEC and ITU-T standardization committees have now embarked on the specification of a new video encoding standard that targets improved encoding efficiency for HD video sources.

Again, the goal is to cut the bit rate in half relative to existing codecs, e.g., AVC. This new specification is being referred to as the High-Efficiency Video Coding (HEVC) standard, and the target applications are broadcast, digital cinema, low-delay interactive communication, mobile entertainment, storage and streaming. Depending on the proposed technology, a final standard could be developed by July 2012.

Standards for multiview video coding based on MPEG-2 and H.264/AVC currently exist, but support is generally limited to a single stereo view that requires glasses to view the 3-D content. MPEG is now planning to standardize a new format for 3-D that supplements stereo video with depth/disparity information and could be used more effectively with glasses-free displays.

source: http://3dcinecast.blogspot.com/2010/11/video-compression-technology.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MediaTechnologyIntelligence+%28Media+Technology+Intelligence%29

By Aldo Cugnini, Broadcast Engineering

original source of post: http://www.nxtbook.com/nxtbooks/penton/be1110/#/16

Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.

To ensure all requirements were represented, the working group included more than two dozen participants representing studios, VFX houses, tool creators, creatives and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX Working Group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.