When you look at a typical home theater display review, there are a few things you are guaranteed to see in any objective analysis. You should see a grayscale chart showing how accurately and linearly the display tracks across different values, and a CIE chart showing how close the colors are to their reference values. You will probably see a gamma chart showing how well the display tracks a target value (usually 2.2), and you might see contrast readings as well as maximum and minimum light output. All of this is very useful for knowing how well a display performs, but there is still a lot missing from this data.
The grayscale numbers are very useful and the easiest to interpret. Grayscale is always unsaturated, unlike individual colors, so all you measure with it is a range of intensities from 0-100%. Measuring at 11 or 21 points lets you see the RGB balance across the whole grayscale and how linear it is over the full range of intensities. You can see the side effects that adjusting the grayscale at one point has on other points, and get a good idea of how accurate the display's grayscale is overall.
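To make that concrete, here is a minimal sketch in Python of how a multi-point grayscale error could be computed: convert each measured XYZ reading to CIELAB and take the CIE76 dE against the ideal neutral gray at that stimulus level. The measured readings and the gamma 2.2 target below are invented for illustration; they are not data from any real display.

```python
import math

D65 = (95.047, 100.0, 108.883)  # reference white (XYZ, Y normalized to 100)

def _f(t):
    # CIELAB transfer function
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

def xyz_to_lab(xyz, white=D65):
    fx, fy, fz = (_f(c / w) for c, w in zip(xyz, white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e76(lab1, lab2):
    # CIE76 delta E: Euclidean distance in Lab space
    return math.dist(lab1, lab2)

def grayscale_errors(readings, gamma=2.2, white=D65):
    """readings: {stimulus (0.0-1.0): measured XYZ tuple}.
    Returns {stimulus: dE vs. the ideal neutral gray at that level}."""
    errors = {}
    for s, xyz in readings.items():
        y = s ** gamma  # ideal relative luminance for this stimulus
        target = xyz_to_lab(tuple(y * w for w in white), white)  # neutral: a* = b* = 0
        errors[s] = delta_e76(xyz_to_lab(xyz, white), target)
    return errors

# Hypothetical sweep points (3 of 11 shown), slightly off-neutral on purpose
readings = {
    0.2: (2.80, 2.90, 3.30),
    0.5: (21.0, 21.8, 24.5),
    1.0: (95.0, 100.0, 107.0),
}
for s, de in grayscale_errors(readings).items():
    print(f"{int(s * 100)}%: dE = {de:.2f}")
```

Plotting the per-point dE, along with the RGB balance, is exactly what the 11- or 21-point charts in a review summarize.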
Color is very different: we measure six color points (the primary and secondary colors), again at 75% or 100% intensity, and then assume the rest of the tracking will be accurate. What if the display is only accurate at those points, and everything else is off? In many CMS systems, adjusting for accuracy at the measured points can make those points better while making every other saturation level and intensity worse. Do we then calibrate so the report looks better, or so the image looks better? How can we determine which setting is the best balance between the two, and how do we get this across to readers without drowning them in data?
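A toy illustration of that "report vs. image" tension, with entirely made-up dE numbers: a CMS setting that nails the single 100% point that gets reported can still lose to another setting once you average the error over a full saturation sweep.

```python
# Hypothetical dE readings along a red saturation sweep (20%-100%)
# for two CMS settings; only the 100% point appears in a typical report.
setting_a = {20: 1.2, 40: 1.8, 60: 2.5, 80: 3.1, 100: 0.4}  # great report
setting_b = {20: 0.9, 40: 1.1, 60: 1.4, 80: 1.6, 100: 1.9}  # better overall

def average_de(sweep):
    return sum(sweep.values()) / len(sweep)

print(f"A: 100% point {setting_a[100]:.1f}, sweep average {average_de(setting_a):.2f}")
print(f"B: 100% point {setting_b[100]:.1f}, sweep average {average_de(setting_b):.2f}")
```

Setting A wins the chart that gets published; setting B wins on the content you actually watch.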
One approach I have used in reviewing PC monitors is the GretagMacbeth ColorChecker chart to evaluate color rendering. Unlike the traditional approach, which calibrates certain targets and then measures those same targets (and therefore produces great results), this has you measure targets you don't calibrate for. Instead you measure values that represent common natural colors: sky blue, natural greens, skin tones, and other shades you would typically see in real-world content. Using these as a benchmark shows how well adjusting the CMS fixes real-world colors, instead of just making a test pattern look better. Unfortunately nothing really supports these measurements for AV components yet, but that may change soon.
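As a sketch of what that evaluation could look like: read CIELAB values for a few patches off the display, then compare them against the published ColorChecker references with a simple CIE76 dE. The reference triplets below are approximate, the "measured" values are invented, and a real workflow would likely use dE2000 instead.

```python
import math

# Approximate CIELAB reference values for three ColorChecker patches
REFERENCE = {
    "dark skin": (37.99, 13.56, 14.06),
    "blue sky": (49.93, -4.88, -21.93),
    "foliage": (43.14, -13.10, 21.91),
}

# Invented post-calibration measurements, for illustration only
measured = {
    "dark skin": (38.50, 12.90, 15.00),
    "blue sky": (48.70, -3.50, -20.00),
    "foliage": (44.00, -14.20, 20.50),
}

def delta_e76(a, b):
    return math.dist(a, b)  # Euclidean distance in Lab space

results = {name: delta_e76(measured[name], ref) for name, ref in REFERENCE.items()}
for name, de in results.items():
    print(f"{name}: dE = {de:.2f}")
```

Because none of these patches are calibration targets, the errors reflect how the display renders colors it was never tuned for.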
One critical thing that we don't usually measure, and that people just assume, is motion resolution. When you buy a 1080p display, your automatic assumption is that you get all 1080 lines of resolution all the time, since that is what the specifications say. In reality, while most of these displays can resolve 1080 lines with a static test pattern (though not always), as soon as that pattern starts to move they run into issues. LCD and LCOS panels have slower pixel response, and hold each frame on screen longer, than a DLP or plasma, so objects in motion show far more smearing and loss of detail on those technologies. This never stood out to me as much as when I viewed the Sony CrystalLED set at CES this year, which had flawless motion handling with no smearing at all; OLED is supposed to offer very similar performance.
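The smearing on sample-and-hold panels can be approximated with simple arithmetic: the eye tracks a moving object while the panel holds each frame still, so the perceived blur width is roughly the object's speed multiplied by the frame hold time. This is a rough rule of thumb with invented numbers, not a measurement method:

```python
def blur_width_px(speed_px_per_s, refresh_hz, persistence=1.0):
    """Approximate perceived blur width on a sample-and-hold display.
    persistence: fraction of the frame the image is actually lit
    (1.0 = full sample-and-hold; lower = impulse/strobing display)."""
    hold_time_s = persistence / refresh_hz
    return speed_px_per_s * hold_time_s

# An object panning at 960 px/s on a 60 Hz full-persistence LCD
# smears across roughly 16 pixels per frame...
print(blur_width_px(960, 60))
# ...while an impulse-type display (plasma/CRT-like) blurs far less.
print(blur_width_px(960, 60, persistence=0.2))
```

This is why higher refresh rates, backlight strobing, and frame interpolation all improve motion resolution: each one shortens the effective hold time.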
Measuring this is a problem, as there is no established method. The most common tool (the FPD Benchmark Blu-ray) isn't easily available, and it only distinguishes between 700 and 800 lines, nothing finer. Additionally, while motion resolution can be increased on LCD and LCOS displays by using frame interpolation, that often introduces artifacts that make it hard to judge whether the trade-off is worthwhile. I recently tried a better solution for measuring this, and hopefully I will be able to start using it in reviews soon to add another piece of data.
There are many other things that could be covered in reviews but often aren't. Do people care about gaming lag in projectors, or do most people not play games on them? Do you want screen uniformity measured, to see whether the display is accurate only at the center or across the whole screen? How about 3D crosstalk numbers? If we are calibrating a vivid mode for maximum light output, how much color error are you OK with, or do you want it to be as accurate as possible? Can you live with an average grayscale dE of 10 in vivid mode, if your reference settings for night viewing keep it under 3?
Since I write these reviews for readers to help them make better decisions about what to buy, I would like to know these things so I cover what people are concerned with. If you have feedback, please use the comments to let me know and I will make sure to respond to your requests and questions.