Coming out of CES 2017, one of the key things you heard thrown around from manufacturers was Color Volume. For the past few years, we’ve heard companies talk about what percentage of the DCI/P3 or Rec.2020 color gamuts they cover, but those are 2D measurements. Color volume adds the third dimension, brightness, to the measurement. So what is color volume, and how should we measure and report on it to best distinguish what a display is capable of?
Color Volume vs. Color Gamut
Understanding the difference between color volume and color gamut requires understanding the differences between SDR and HDR content. With SDR content, the original master was likely completed at around 100 nits (cd/m²) of brightness. While your display at home was likely set brighter than this, as long as it could reach 100 nits it could display everything in the master. So when we wanted to determine how many colors a display could show, we measured 8 key color points (Black, White, Red, Green, Blue, Cyan, Magenta, and Yellow), and that told us how much of the color gamut we could see. It didn’t matter how bright a color was or could get, only that the display produced the colors inside that 2D diagram.
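As a rough sketch of how that 2D gamut coverage works, you can compare triangle areas on the CIE xy chart. This is a simplification: the area ratio only equals coverage when the display's primaries sit entirely inside the target gamut (as Rec.709 does inside DCI/P3), and coverage computed on the u′v′ chart would give a different percentage. The primary coordinates below are the published Rec.709 and DCI-P3 xy values.

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle given (x, y) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Published red/green/blue primaries on the CIE xy chromaticity chart
dci_p3  = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
rec_709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

# Rec.709 lies entirely inside DCI/P3, so the area ratio is the coverage
coverage = triangle_area(*rec_709) / triangle_area(*dci_p3)
print(f"Rec.709 covers {coverage:.1%} of DCI/P3 in xy")  # ≈ 73.7%
```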
When you see charts showing gamut coverage in publications, including here at Reference Home Theater, you see a CIE chart. There are two versions you’ll encounter: the classic CIE xy or the CIE u′v′. As the names suggest, each has only two variables, xy or u′v′. The underlying color spaces actually contain a third variable and are properly xyY and L*u*v*. That Y and that L* are the luminance values, which are not captured in the standard chart but tell you how bright a color is. Color volume uses this missing variable, which has always been there but previously was not used to determine what colors a display could show.
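You can see the role of that third variable in the standard xyY → XYZ conversion: x and y place a color on the flat chromaticity chart, while the Y that gets dropped when plotting is its luminance. A minimal sketch:

```python
def xyY_to_XYZ(x, y, Y):
    """Convert CIE xyY to XYZ. Y (luminance) passes straight through;
    x and y alone only locate the color on the 2D chromaticity chart."""
    if y == 0:
        return 0.0, 0.0, 0.0  # black carries no chromaticity
    X = (x / y) * Y
    Z = ((1 - x - y) / y) * Y
    return X, Y, Z

# D65 white at 100 nits. The same chromaticity at 1,000 nits is the
# same point on the xy chart, but a very different color volume point.
print(xyY_to_XYZ(0.3127, 0.3290, 100))
```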
HDR content has a much higher peak brightness than SDR, and many displays cannot show the brightest HDR highlights at the levels they should. Current HDR10 UltraHD Blu-ray discs are mastered with peak light levels of 1,000 or 4,000 nits, and Dolby Vision titles are mastered with a peak of 4,000 nits, with Dolby already talking about targeting 10,000 nits. Now not only can a color be more saturated than before, thanks to the larger Rec.2020 color gamut this content can use, but instead of being limited to 100 nits of brightness it can be far brighter than that!
No display out there today can display these colors at 4,000 nits or 10,000 nits, and many struggle to even do 1,000 nits. Using the traditional gamut measurement techniques we can see that two TVs might cover the same amount of the DCI/P3 or Rec.2020 gamut, but that’s at a single brightness level. What happens when they get HDR content that is brighter than that? This is where color volume comes in.
Measuring color volume takes far more than the 8 measurements used for gamut. Now we are measuring between 140 and 393 points, or potentially more. Instead of a single brightness level for each point on the 2D chart, we measure a range of brightness levels, just as HDR content contains. Do we want to measure out to 1,000 nits? Or 10,000 nits? We can do that. By capturing this wide range of measurements, we collect data that represents what a display can do with HDR content far better than we could before.
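The exact point sets CalMAN uses (140, 393, and so on) are its own; as an illustration only, here is how a three-dimensional sweep of hue, saturation, and luminance quickly grows past the 8 points of a gamut check. The hue names, step counts, and nit levels below are arbitrary choices for the sketch, not the real pattern set.

```python
from itertools import product

# Arbitrary sweep for illustration: 6 hues x 4 saturation steps x 5
# luminance levels, plus a neutral (gray) ramp. Real point sets differ.
hues = ["red", "green", "blue", "cyan", "magenta", "yellow"]
saturations = [25, 50, 75, 100]            # percent
luminances = [100, 250, 500, 750, 1000]    # nits

points = list(product(hues, saturations, luminances))
points += [("neutral", 0, lum) for lum in luminances]

print(len(points))  # 6*4*5 + 5 = 125 measurement points
```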
With a new measurement technique comes the fact that there is currently no real standard for how to measure and report color volume. There are a number of different approaches, and each has advantages and disadvantages. All screenshots and data are taken from the upcoming CalMAN 5.8 release, which can measure color volume.
The model closest to the way it has been done before is the L*a*b* method. We specify a peak light level and then measure 140 points to determine the display’s gamut coverage at that peak. Like our prior gamut method, this reports the percentage of a specific color volume that is covered. If you’ve seen the new color volume data that Rtings began to publish, this is similar to what they use. Since you can customize this with a specific nits target to match how content is created, it is a good way to report. How well does this display show 1,000-nit, DCI/P3-gamut HDR, which describes much of today’s streaming and UHD Blu-ray content? You get a percentage of the gamut that is covered. The screenshot below shows this for the Vizio P65-C1 TV, which can display 33.5% of the colors contained in a 1,000-nit, DCI/P3 color volume.
This number is not perfect. I measured this with a 10% window, but the P65-C1 is brighter when using a 25% window because of how its full array backlight works. Other TVs might be brighter with a 2% window, like an OLED, because of how they are designed. Since there is no standard for reporting color volume, someone could use any window size and report the one that makes a TV look better or worse. We would say that 10% is the standard, but does that correctly reflect what is contained in HDR material, or is a different window more accurate?
These are all good questions that we don’t have a perfect answer for. There is a new pattern, a 4% window with a 40% APL area around it, but that is much brighter than almost all content you’d see. It probably will make LCDs perform better than OLEDs because of the high APL, but it might not have any real-world correlation. When I change to the 4-40 window size, the Vizio jumps from 33.5% coverage of that 1,000-nit, DCI/P3 volume to 34.1%. If I use a 25% window, it goes up to 37.7%. That is roughly a 13% relative improvement just from using a different window size, and we can’t be certain which best reflects the real world.
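It is worth being precise about what that "13%" means: the jump from 33.5% to 37.7% coverage is 4.2 percentage points, which works out to roughly a 13% (strictly about 12.5%) relative improvement:

```python
ten_pct_window = 33.5    # volume coverage measured with a 10% window
quarter_window = 37.7    # volume coverage measured with a 25% window

absolute_gain = quarter_window - ten_pct_window        # percentage points
relative_gain = absolute_gain / ten_pct_window * 100   # percent change

print(f"{absolute_gain:.1f} points, {relative_gain:.1f}% relative")
```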
There is also a relative light level option. This uses the peak light output of the TV itself and reports what percentage of the color volume it can display using that value, comparing the display to an ideal version of itself with the same output. While this might be interesting, real-world content is mastered to a set level like 1,000 or 4,000 nits, so it doesn’t have much real-world application. When I use this method, the Vizio jumps up to 78.4%, since the peak luminance it can produce for HDR content, only around 450-500 nits, is no longer a factor. The fact that it comes up a bit short of the DCI/P3 gamut is what keeps this number below 100%.
Dolby has introduced its own method for reporting display color volume, called Perceptual Color Volume. It uses a different color space and tells you how many millions of distinguishable colors a display can show. Instead of targeting a specific gamut, it assumes a 10,000-nit peak with no preset gamut and then finds how many colors within that space the display can produce. This gives us a simple, easy-to-understand number that lets you compare two displays quickly without having to understand as much about gamut sizes and light levels.
The downside to this measurement is that it doesn’t let you get as specific about different content. One display might have a technology that gives it a much larger red gamut than DCI/P3 but comes up short in blue; it could report a larger number of potential colors than a display that covers all of DCI/P3, including blue. The first display can show more colors overall, but with today’s content the display with fewer total colors might actually show more of them. One may do better with future content while the other is better today, and that distinction isn’t well represented by a single Millions of Colors number.
How We Will Report
With so many different options available, how do we plan to report on this data? Right now, we plan to take a number of different readings and save them. This will let us gather information on these displays and see how the numbers line up with our real-world viewing tests. We will collect color volume percentages at 1,000, 4,000, and 10,000 nits along with Dolby’s Millions of Colors method. We will use 10% windows by default, but also measure with other sizes if we find a display does better with a certain one. Since there is no standard size, we can test as many as we want.
We know that many TVs behave differently depending on the window size. When LG talks about the C7 and other 2017 OLED displays being brighter, the fine print is that they use a 3% window to test that. LCDs like the Samsung Q9 and Q7 or the Sony Z9D usually perform best with a 10% window, as it engages more of the backlight than a 3% one. The Vizio P65-C1 is the rare display that does better with a larger window, like 25%, than with 10%.
We will also only use calibrated modes, or the most accurate out-of-the-box picture mode, when calculating the volume. Using a vivid mode with a larger gamut and higher output might produce larger volume numbers, but it would be less accurate and you would not want to use it for serious viewing. If you’re reading all of this information about a display, you probably want it to be accurate, and we should only report numbers you would see in real life.
What is important is that we now have a new tool for reporting how a display performs with HDR content. We can stop reporting the simple DCI/P3 gamut coverage number and use one of these instead, which is far more accurate and indicative of real-world performance. We’ve seen a number of TVs arrive that are labeled “HDR” displays and can produce a larger color gamut, but can’t get that bright. This testing will let us show which TVs labeled “HDR” can actually perform well with HDR content and which cannot.
These numbers aren’t perfect, and it will take us time to figure out which is best to use, but even with the questions we still have about them, they will provide an improvement in what we can tell you about how a TV performs with HDR content.