Video

Cable Operators Harmonize Around HDR10 for High Dynamic Range Video


Arianne Hinds
Principal Architect, Video & Standards Strategy Research and Development

Apr 26, 2018

Cable ready to deploy both on-demand and live HDR services

High dynamic range, or HDR, marks the latest in a series of enhancements to dramatically improve the quality of video delivered either live or on demand within the cable industry. And, with a proverbial sigh of relief, cable operators have recently harmonized around HDR10 as a starting point for deploying these services, leaving room for further optimized solutions going forward.


Crawl, walk, run.

Think of it as having reached the first stage in a “crawl, walk, run” approach.

HDR is grounded in a simple design objective: make the darkest parts of the video seem even darker while making the brightest parts seem even brighter. Everything else in the video should scale accordingly.

For the mathematician, this is a simple problem, but for the cable operators, not so much.

More than one way to skin the cat

It turns out that there is a cornucopia of possible solutions for implementing HDR. Hence, choosing which one to deploy requires the equivalent of navigating a complex neural network of solution pros and cons that would put to shame even the smartest artificial intelligence algorithm. Then, following that decision-making process, which could result in different answers for each operator, there is the issue of finding content producers that happen to be producing content in a way that leverages the chosen path. All of this to achieve the goal of delivering high-quality content that will appear consistently regardless of whether it is delivered live or on demand, or whether it is being viewed on any one of a plethora of devices, including television sets, e-readers, laptops, phones, and tablets.

But, why so difficult?

Solutions to implement HDR vary depending on the camera settings used to capture the video and the settings of the displays used to consume it. Further complicating the issue for cable operators is that the on-demand video workflow is not always identical to the live production video workflow. And, consequently, at any point between video production and video consumption, the video itself is relayed across a network of devices that may not be completely tuned for the chosen solution. To put it concretely, the more complicated the HDR solution is, the easier it is for the video to get mangled in the network.


Making matters even worse

Each of the solutions has at least one bona fide standard backing it from a legitimate standards-developing organization, or SDO. SDOs that have adopted HDR standards include ATSC, ETSI, SMPTE, and MPEG.

From simple to not-so-simple solution designs, how to make HDR work?

The most straightforward way to implement an HDR workflow is to literally modify each pixel of the video with what are called electro-optical transfer functions while the video is being produced. This has the “con” that small variations in the luminance of each video stream are not necessarily accounted for. Nevertheless, the results are very good. Designs consistent with this strategy include solutions called PQ10 and Hybrid Log-Gamma (HLG).
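
To make the transfer-function idea concrete, below is a minimal sketch in C of the PQ electro-optical transfer function standardized in SMPTE ST 2084, which underlies both PQ10 and HDR10. It maps a normalized code value back to absolute luminance; the constants come from the published standard, and everything else here is purely illustrative.

#include <math.h>

/* PQ (SMPTE ST 2084) EOTF: maps a normalized code value in [0,1] to
 * absolute luminance in cd/m^2 (nits), up to 10,000 nits. */
static const double M1 = 2610.0 / 16384.0;
static const double M2 = 2523.0 / 4096.0 * 128.0;
static const double C1 = 3424.0 / 4096.0;
static const double C2 = 2413.0 / 4096.0 * 32.0;
static const double C3 = 2392.0 / 4096.0 * 32.0;

double pq_eotf(double code)              /* code: normalized signal, 0..1 */
{
    double p   = pow(code, 1.0 / M2);
    double num = fmax(p - C1, 0.0);
    double den = C2 - C3 * p;
    return 10000.0 * pow(num / den, 1.0 / M1);   /* luminance in nits */
}

Feeding in a mid-scale code value of 0.5, for example, yields roughly 92 nits, which shows how PQ spends most of its code values on the darker regions where our eyes are most sensitive.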

A slightly more complex solution builds on the PQ10 solution above by adding static metadata to signal variations in luminance between video streams. This solution is known as HDR10.
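
That metadata is static, i.e. fixed for the entire stream. As a sketch of the kind of information it carries, the structure below mirrors the SMPTE ST 2086 mastering display color volume plus the MaxCLL/MaxFALL content light levels; the field names are illustrative rather than taken from any particular API.

/* Illustrative HDR10 static metadata: mastering display color volume
 * (SMPTE ST 2086) plus content light levels (CTA-861.3). */
struct hdr10_static_metadata {
    unsigned short display_primaries_x[3];   /* R,G,B chromaticity, in 0.00002 steps */
    unsigned short display_primaries_y[3];
    unsigned short white_point_x, white_point_y;
    unsigned int   max_mastering_luminance;  /* in units of 0.0001 cd/m^2 */
    unsigned int   min_mastering_luminance;  /* in units of 0.0001 cd/m^2 */
    unsigned short max_cll;                  /* max content light level, cd/m^2 */
    unsigned short max_fall;                 /* max frame-average light level, cd/m^2 */
};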

Finally, the comparatively more difficult-to-implement solutions signal variations in luminance between scenes in each video stream by providing metadata for each scene prior to the scene itself. Another more difficult-to-implement solution is to encode a standard dynamic range (SDR) signal separately from a side-stream of information that enhances the signal from SDR to HDR.
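
As a rough sketch of the dynamic-metadata idea, each scene might be preceded by a record like the following so the display can re-tune its luminance mapping scene by scene; the fields here are hypothetical, for illustration only.

/* Hypothetical per-scene dynamic metadata, delivered ahead of each scene. */
struct scene_metadata {
    unsigned int first_frame;     /* frame number where the scene begins */
    double       scene_max_nits;  /* peak luminance within the scene */
    double       scene_avg_nits;  /* average luminance within the scene */
};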

So, why even bother with the more complicated solutions?

The answer is, as one might expect, that the more difficult-to-implement solutions can produce better results. The complexity for a cable operator is, however, that the additional metadata or side-stream information requires more fine-tuning of all of the underlying network components, both for live and on-demand.

Cable roadmap: start with HDR10 and then optimize

Earlier this year, CableLabs facilitated meetings of its members to study candidate roadmaps, taking into account both near- and longer-term solutions for the delivery of HDR services. Following that study of all potential ways forward, cable operators successfully developed their roadmap: start with HDR10 and build further optimizations into their networks from there. The benefit includes a harmonized approach that leaves room for all other solutions, including HLG, as conversion from HDR10 to HLG is a relatively simple process. Other, more complex solutions can also be deployed subsequently as the corresponding network components are added to support these services. For content producers, there is the benefit of having a clearly defined starting point for their cable network business partners.


Finally, cable operators can send the white-smoke signal; cable is ready to deliver HDR.

--

Want to learn more about HDR in the future? Make sure to subscribe to our blog.



Technology

Towards the Holodeck Experience: Seeking Life-Like Interaction with Virtual Reality


Arianne Hinds
Principal Architect, Video & Standards Strategy Research and Development

Sep 5, 2017

By now, most of us are well aware of the market buzz around the topics of virtual and augmented reality. Many of us, at some point or another, have donned the bulky, head-mounted gear and tepidly stepped into the experience to check it out for ourselves. And, depending on how sophisticated your setup is (and how much it costs), your mileage will vary. Ironically, some research suggests that it’s the baby boomers who are more likely to be “blown away” by virtual reality, while the millennials are more likely to respond with an ambivalent “meh”. And this brings us to the ultimate question that is simmering on the minds of a whole lot of people: is virtual reality here to stay?

It’s a great question.

Certainly, the various incarnations of 3D viewing in the last half-century suggest that we are not happy with something. Our current viewing conditions are not good enough, or … something isn’t quite right with the way we consume video today.

What do you want to see?

Let’s face it, the way that we consume video today is not the way our eyes were built to record visual information, especially in the “real world”. Looking into the real world (which, by the way, is not what you are doing right now), your eyes capture much more information than the color and intensity of light reflected off of the objects in the scene. In fact, the Human Visual System (HVS) is designed to pick up on many visual cues, and these cues are extremely difficult to replicate both in current-generation display technology and in content.

Displays and content? Yes. Alas, it is a two-part problem. But let’s first get back to the issue of visual cues.

What your brain expects you to see

Consider this: for those of us with the gift of sight, the HVS provides roughly 90% of the information we absorb every day, and as a result, our brains are well-tuned to the various laws of physics and the corresponding patterns of light. Put more simply, we recognize when something just doesn’t look like it should, or when there is a mismatch between what we see and what we feel or do. These mismatches in sensory signals are where our visual cues come into play.

Here are some cues that are most important:

  • Vergence distance is the distance that the brain perceives when the muscles of the eyes move to focus at a physical location, or focal plane. When that focal plane is at a fixed distance from our eyes, say, like the screen in your VR headset, the brain is literally not expecting you to detect large changes in distance. After all, your eye muscles are fixed on something that is physically attached to your face, i.e. the screen. But when the visual content is produced so as to simulate the illusion of depth (especially large changes in depth), the brain recognizes a mismatch between the distance information it is getting from our eyes and the distance it is trained to receive in the real world based on where our eyes are physically focused. The result? Motion sickness and/or a slew of other unpleasantries.
  • Motion parallax: As you, the viewer, physically move, say while walking through a room in a museum, objects that are physically closer to you should move more quickly across your field of view (FOV), while objects that are farther away should move more slowly.
  • Horizontal and vertical parallax: Objects in the FOV should appear differently when viewed from different angles, as your horizontal and vertical positions change.
  • Motion to photon latency: It is really unpleasant when you are wearing a VR headset and the visual content doesn’t change right away to accommodate the movements of your head. This lag is called “motion to photon” latency. To achieve a realistic experience, motion-to-photon latency must be less than 20ms, and that means that service providers, e.g. cable operators, will need to design networks that can deterministically support extremely low latency. After all, from the time that you move your head, a lot of things need to happen, including signaling head motion, identifying the content consistent with the motion, fetching that content if not already available to the headset, and so on (a rough latency budget is sketched just after this list).
  • Support for occlusions, including the filling of “holes”. As you move through, or across, a visual scene, objects that are in front of or behind other objects should block each other, or begin to reappear consistent with your movements.
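
To see how quickly 20ms gets used up, here is a back-of-the-envelope motion-to-photon budget in C. The 20ms target is the figure cited above; the per-stage latencies are assumptions chosen purely for illustration.

#include <stdio.h>

/* Back-of-the-envelope motion-to-photon budget. The 20 ms target comes
 * from the discussion above; the per-stage latencies are assumptions. */
int main(void)
{
    double track_ms   = 1.0;   /* sense and signal the head motion */
    double fetch_ms   = 7.0;   /* fetch content not already in the headset */
    double render_ms  = 5.0;   /* render the view for the new head pose */
    double display_ms = 5.5;   /* scan-out and pixel response */

    double total = track_ms + fetch_ms + render_ms + display_ms;
    printf("motion-to-photon: %.1f ms of a 20 ms budget (%s)\n",
           total, total < 20.0 ? "OK" : "over budget");
    return 0;
}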

It’s no wonder…

Given all of these huge demands placed on the technology by our brains, it’s no wonder that current VR is not quite there yet. But, what will it take to get there? How far does the technology still have to go? Will there ever be a real holodeck? If “yes”, when? Will it be something that we experience in our lifetimes?

The holodeck first appeared properly in Star Trek: The Next Generation in 1987. The holodeck was a virtual reality environment that used holographic projections to make it possible to interact physically with the virtual world.

Fortunately, there are a lot of positive signs to indicate that we might just get to see a holodeck sometime soon. Of course, that is not a promise, but let’s say that there is evidence that content production, distribution, and display are making significant strides. How, you say?

Capturing and displaying light fields

Light fields are 3D volumes of light as opposed to the ordinary 2D planes of light that are commonly distributed from legacy cameras to legacy displays. When the HVS captures light in the natural world (i.e. not from a 2D display), it does so by capturing light from a 3D space, i.e. a volume of light being reflected from the objects in our field of view. That volume of light contains the necessary information to trigger the all-too-important visual cues for our brains, i.e. allowing us to experience the visual information in a way that is natural to our brains.
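
One common way to make this concrete is the classic two-plane parameterization, in which every ray in the volume is indexed by where it crosses a camera plane (u, v) and an image plane (s, t). Here is a minimal sketch, with purely illustrative array sizes.

/* Two-plane (4D) light field: each ray is indexed by its crossing points
 * on a camera plane (u,v) and an image plane (s,t). Sizes are illustrative. */
#define NU 8
#define NV 8
#define NS 256
#define NT 256

typedef struct { unsigned char r, g, b; } rgb8;

static rgb8 field[NU][NV][NS][NT];   /* the captured volume of light */

/* Radiance along a single ray; a real renderer would interpolate
 * between neighboring samples rather than index them exactly. */
rgb8 sample_ray(int u, int v, int s, int t)
{
    return field[u][v][s][t];
}

Even at these modest sizes, the array holds over four million rays, which hints at why compression and distribution (more on that below) are such serious problems.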

So, in a nutshell, not only does there need to be a way to capture that volume of light, but there also needs to be a way to distribute that volume of light over a network (e.g., a cable network), and there needs to be a display at the end of the network that is capable of reproducing the volume of light from the digital signal that was sent over the network. A piece of cake, right?

Believe it or not

There is evidence of significant progress on all fronts. For example, at the F8 conference earlier this year, Facebook unveiled its light field cameras and corresponding workflow. Lytro is also a key player in the light field ecosystem with its production-based light field cameras.

For the display side, there is Light Field Lab and Ostendo, both with the mission to make in-home viewing with light field displays, i.e. displays that are capable of projecting a volume of light, a reality.

On the distribution front, both MPEG and JPEG have projects underway to make the compression and distribution of light field content possible. And, by the way, what is the digital format for that content? Check out this news from MPEG’s 119th meeting in Torino:

At its 119th meeting, MPEG issued Draft Requirements to develop a standard to define a scene representation media container suitable for interchange of content for authoring and rendering rich immersive experiences. Called Hybrid Natural/Synthetic Scene (HNSS) data container, the objective of the standard will be to define a scene graph data representation and the associated container for media that can be rendered to deliver photorealistic hybrid scenes, including scenes that obey the natural flows of light, energy propagation and physical kinematic operations. The container will support various types of media that can be rendered together, including volumetric media that is computer generated or captured from the real world.

This latest work is motivated by contributions submitted to MPEG by CableLabs, OTOY, and Light Field Lab.

Hmmmm … reading the proverbial tea-leaves, maybe we are not so far away from that holodeck experience after all.

--

Subscribe to our blog to read more about virtual reality and more CableLabs innovations.

News

A Coder’s Announcement of the CableLabs C3 Platform for Collaborative Software Development


Arianne Hinds
Principal Architect, Video & Standards Strategy Research and Development

Mar 24, 2016


If ( the sight of software source code does not make you uncomfortable )
{
   /**************************************************************************************
    * Then please continue to read the code below to get the story behind this blog.
    * If reading source code is not your “cup of tea”, then please find the main message
    * for this blog below, i.e. following the big ELSE statement.
    *
    * Copyright:  2016 Cable Television Laboratories, Inc.
    **************************************************************************************/

   enum {FALSE=0, TRUE=1} ;

   int cable_industry_embraces_open_source = TRUE ;

   int number_of_active_C3_projects = 3 ;


   while ( cable_industry_embraces_open_source )
   { 
      CableLabs stands up the C3 software development platform;

      If ( already_familiar_with_C3 == FALSE )
      {
         enum essential_elements_of_C3 { IT_INFRASTRUCTURE ,
                       ACCESS_CONTROL ,
                       OPEN_SOURCE_BEST_PRACTICES ,
                       CODE_REPOSITORIES ,
                       ISSUE_TRACKER ,
                       BUILD_TOOLS ,
                       VERIFICATION_TESTS ,
                       REVIEW_AND_CHECKIN_TOOLS ,
                       MODULAR_LICENSING ,
                       FLEXIBLE_GOVERNANCE_MODEL ,
                       IPR_MANAGEMENT } ;
                       
            
         C3 is supported by CableLabs ;    
         C3 follows after successful Linux Foundation model ; 
         C3 is a platform for collaborative software development ;
         C3 is scalable to accommodate a large number of projects ;       
         C3 provides a project template charter for anyone to start a new project ;
         Projects can be truly “open” where access is open to anyone ; 
         Projects can also be “closed” where access to code (etc) is restricted ;
         Projects can migrate from C3 to other Open Source and Standards Bodies ;
      }

      for ( int project_number = 0 ; project_number < number_of_active_C3_projects ; project_number++ )
      {

         switch ( project_number ) {

         case 0:  /* Cisco OpenRPD project */
            RPD is Remote PHY Device ;
            Project seed code donated by CISCO ;
            Key cable industry vendors are participating ;
            break ;

         case 1:  /* Proactive Network Maintenance */
            PNM tools help operators troubleshoot the network ;
            Network devices measure key diagnostic attributes ;
            Identify and isolate problems before they are customer impacting ;
            break ;

         case 2:  /* TruView */
            TruView software characterizes cable plant signaling ;
            Project seed code supplied by Comcast and extended by CableLabs ;
            Advanced video diagnostic tools ;
            break ;

         default:
            C3 has more than 3 projects now ;
         }
            
      }

      If ( ( interested_in_learning_more || interested_in_starting_project ) == TRUE )
      {

         review project charter form at https://community.cablelabs.com/wiki/display/C3 ;
         send email to c3@cablelabs.com ;
         
      }
   }      
         
}
      
ELSE { 
   /*****************************************************************************************
    * We assume that the plain text version of this story is easier for you to read.
    *
    * The relevance of collaborative software development, including open source and  
    * community-source approaches, for the cable industry cannot be overstated.
    * Increasingly, collaboratively developed software is being deployed in cable  
    * products and services. And, the development tools that the industry uses to build  
    * those products and services often leverage open source implementations. The RDK  
    * is just one example that demonstrates the robust and powerful approach of sourcing  
    * the software development across an open or semi-open community (i.e. community  
    * source group) for the cable industry.  Other cross-industry organizations where 
    * the cable industry participates include the Open Platform for Network Function 
    * Virtualization  and the OpenDaylight Platform.  
    *
    * So, what’s missing for the cable industry?  
    *
    * ANSWER:  A platform where the cable industry (and beyond) can collectively  
    * collaborate around the development of tools and software assets relevant, 
    * but not limited, to cable.  
    * 
    * At its Winter Conference 2016, CableLabs announced the launch of the 
    * Common Code Community (C3) platform, including essential elements such as IT 
    * infrastructure, development tools, repositories, access control, recommended  
    * best practices, modular licensing, and outreach to other communities and standards. 
    * 
    * As of today, the C3 project hosts three projects:
    * 1.  OpenRPD – A software reference implementation supporting the Remote PHY
    *     Device architecture – a virtualization architecture used in distributed
    *     CCAP implementations.  Seeded with software developed and contributed by 
    *     Cisco, the project drives core router and remote PHY interoperability.
    * 2.  Proactive Network Maintenance (PNM) – A set of tools that reduce troubleshooting
    *     and problem resolution time by detecting and localizing network problems 
    *     before they impact the customer.   
    * 3.  TruView - Software seeded by Comcast and extended by CableLabs as a set 
    *     of tools to assist cable operators in the successful deployment of 2-way 
    *     infrastructure used to load and initialize set-top boxes.
    *
    * Each project defines its own governance structure such as who is allowed 
    * to commit code, submit code, launch builds, define new code releases, and so on.
    *
    * The C3 platform also provides a flexible license model where each project sets 
    * the licensing and IPR structures most appropriate for its goals and assets.
    *
    * For more information on C3: you can send email to: c3@cablelabs.com
    *
    ******************************************************************************************/

}

Video

The Search for a Royalty-Free Video Codec


Arianne Hinds
Principal Architect, Video & Standards Strategy Research and Development

Feb 19, 2014

Recently, there have been some discussions in the news about the quest for a royalty-free video codec in key standards organizations.  How did this quest get started and what exactly is a royalty-free video codec anyway?

For starters, “codec” is short for “coder-decoder”: software or hardware capable of compressing and decompressing large media files containing music or video.  Codecs make it possible to deliver music or video over cable networks, or the Internet for that matter, so that these media files do not consume enormous amounts of bandwidth while being transmitted.
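
As a toy illustration of the “coder” half, here is a run-length encoder in C. Real video codecs such as AVC or VP8 are vastly more sophisticated, but the goal is the same: represent the original samples in fewer bytes.

#include <stddef.h>

/* Toy run-length encoder: writes (count, symbol) pairs to out and
 * returns the compressed size in bytes. */
size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        unsigned char run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        out[o++] = run;      /* how many times the symbol repeats */
        out[o++] = in[i];    /* the symbol itself */
        i += run;
    }
    return o;
}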

A royalty-free video codec is one that is licensed or otherwise available free of charge for use in some or all applications, i.e., through some form of agreement by the owners of the underlying intellectual property rights that the technology can be licensed without a fee.  As cable set-top boxes collectively represent a large application and use of codecs, a royalty-free codec could be an attractive option versus a royalty-bearing codec.

A royalty-bearing licensing model has sufficed for traditional pay TV service providers because a simple one-time license fee is generally built into the cost of the set-top box.  However, in web- or cloud-based markets there is no single device owned by the video provider to which a license fee can be attached.  Rather, the video service is typically provided via web browsers that are generally deployed for free.  So, the hunt is on for a royalty-free codec that may be better suited to the Internet-driven marketplace for video distribution.

Alliances

Two key standards bodies, the IETF and the W3C, which are responsible for developing the specifications for the Internet and the World Wide Web, have been collaborating on the development of a new real-time communication standard called WebRTC.   More specifically, the W3C has requested that the IETF recommend a video codec that will be mandatory to implement for this new standard and their preference is that it would be a royalty-free codec.  Why?  Because most standards that are developed for the World Wide Web are indeed royalty-free standards.

In response to this request from the W3C, the IETF has launched a project to identify a royalty-free codec.

It is worth noting that for the last decade, MPEG has been exploring the feasibility of a so-called “Type 1” standard for video coding.  What is a Type 1 standard and how is it different from a royalty-free codec?  A “Type 1” standard is a formal standard published by a standards organization for which patent holders are prepared to grant licenses free of charge, whereas a royalty-free codec is not necessarily a formal standard; it might be, for example, an open-source project.  This move by MPEG was seen by some as especially compelling because MPEG traditionally develops high-performance standards that are typically royalty-bearing.  More recently, MPEG has actually launched a project to develop a Type 1 video coding standard intended for Internet applications.

These separate but related efforts have sparked an intense competition as to what will be the preferred royalty-free video codec.  Interestingly, amongst the codecs being considered in both MPEG and the IETF, the two candidates that appear most likely to succeed are the same two video codecs:  VP8 and AVC!

Power in your Corner – Who’s backing which Codec?

VP8 is owned by Google, which has worked with MPEG LA to arrive at an agreement with most of the owners of essential patents so that Google can absorb licensing fees for these patents to help make VP8 royalty-free.  VP8 is available from the WebM open-source project managed by Google.

The MPEG Type 1 AVC codec being considered differs from the current AVC codec that is widely deployed in set-top boxes, which is not royalty-free.  How are these AVC codecs different?  It turns out that years ago, when AVC was first being developed, MPEG created the Constrained Baseline profile, which was expected to become a limited and royalty-free version of AVC.  The result was not widely adopted, for multiple reasons, including issues related to licensing of the underlying patents.  The more widely deployed AVC codec is the High Profile of AVC (again, not royalty-free).

One of the major proponents of the royalty-free Constrained Baseline AVC codec is Cisco. How far is it willing to go to win this race?

Last year, Cisco boldly announced that it would open source its AVC binary executable and that it would absorb the associated MPEG LA licensing fees so that AVC can be made available for use in the web for free.  That was pretty exciting news, and clearly an indication of lengths Cisco is willing to go to help sway the outcome of this race in favor of AVC.  More recently however, MPEG closely evaluated the visual quality produced by both the Constrained Baseline version of AVC and VP8.  Based on the test results reported at the January 2014 MPEG meeting, MPEG has initiated the steps to formalize VP8 as a new addition to the MPEG-4 suite of standards.  This action by MPEG is a credit to MPEG’s recognition that while it develops world-class standards from the ground-up, some technologies that are developed for specific applications could also be formalized as standards.

Ultimately, a final selection and corresponding announcement one way or the other by MPEG could carry leverage into the codec competition underway in the IETF.  In the meantime, the quest for a royalty-free codec awaits a final outcome.

Dr. Arianne Hinds joined CableLabs in 2012 and is currently responsible for orchestrating the participation of CableLabs in industry consortia and standards-developing organizations.  She is an active participant in MPEG, and currently chairs the INCITS L3 Technical Committee, the parent committee overseeing the participation of the United States in both MPEG and JPEG.