At four times the horizontal and vertical resolution of 1080p and sixteen times the overall pixels, 8K images — named for the approximate number of pixels along the horizontal axis — are likely the clearest digital pictures the human eye will ever see. And when it comes to TV and visual storytelling, resolution definitely matters. Whether you’re hypnotized by a gorgeous aerial shot of a wild elephant herd or drooling over a close-up of a plated dish, a truly vivid digital image has a way of leaping off the screen and embedding itself in the viewer’s mind.
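The multiples in that opening claim are easy to verify, assuming the standard consumer resolutions of 8K UHD (7680 x 4320) and 1080p Full HD (1920 x 1080):

```python
# Sanity-checking the resolution multiples, using the standard
# 8K UHD and 1080p (Full HD) pixel dimensions.

w_8k, h_8k = 7680, 4320      # 8K UHD
w_hd, h_hd = 1920, 1080      # 1080p (Full HD)

print(w_8k / w_hd)                      # horizontal ratio: 4.0
print(h_8k / h_hd)                      # vertical ratio: 4.0
print((w_8k * h_8k) / (w_hd * h_hd))    # total pixel ratio: 16.0
print(w_8k * h_8k)                      # 33177600, i.e. ~33 million pixels
```

That final figure, roughly 33 million pixels, is the total an 8K panel displays.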
But some have raised questions about the utility of such high resolution for viewers who, after all, only have eyes of a certain size. Whether 8K has any meaning for human visual processing depends on a system that involves some of the most complex and mysterious structures of the human body. The combined functions of these structures produce a mental experience that scientists are still struggling to map, but every experiment brings us closer to the fascinating truth.
FROM PIXEL TO PICTURE: HOW OUR EYES TURN LIGHT INTO IMAGERY
Light — whether concentrated in pixels or unfiltered as a flood of tiny photons from the 3D, physical world — arrives at the eye in a diffuse, illegible mess. Before the brain can start to sort through the information, the light is caught and refracted by the eye’s internal structures, particularly the natural crystalline lens and a set of “humors.” No jokes here, though: these are the liquid-like substances that protect the lens with a watery cushion and give the eye its spherical shape, according to Dr. Lynn Huang, an ophthalmologist and retinal surgeon.
If the visible structures of the eye are like a camera lens, then “the retina is like the film inside the camera,” says Dr. Huang. This thin, delicate organ — with a consistency “like wet toilet paper” — contains three layers of specialized neurons that do the first round of processing on visual information. Light-sensitive cells in the retina called photoreceptors absorb the photons as they are focused onto the back of the eye.
Each eye’s photoreceptors include around 120 million rods, which react to light intensity, and 6 to 7 million color-sensitive cones. Rods occupy the majority of retinal real estate, but at the very center is a tiny region densely packed with cones, called the fovea, explains Dr. Huang. As the body’s primary photosensitive cells, the rods and cones are essential for the conversion of visual data into electrochemical signals.
Neurons in the retina can then begin to parse the visual field by registering contrasts in the photoreceptor data. Contrasts — or “edges” — are the basic units of all visual processing, according to Susana Martinez-Conde and Stephen Macknik, professors of ophthalmology and neurology and co-authors of the book Champions of Illusion: The Science Behind Mind-Boggling Images and Mystifying Brain Puzzles. “An edge is a difference between two points in space of some kind, either color or light,” explains Dr. Macknik. Once their signals arrive in the brain, these edges will form contour lines around the shapes of objects in the visual field.
Like a camera, the eye must be pointed directly at something in order to see it with as much clarity as possible; even the most powerful lenses can’t capture details with maximum resolution across an entire image. Your eyes can only see at the sharpest resolution, with 100 percent acuity, in the fovea, a very small fraction of your visual field. “About 0.1 percent of your visual field, at any given time, is the only place you’ve ever had 20/20 vision,” says Dr. Macknik; the rest of the field is “just visual garbage.”
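That “0.1 percent” figure can be roughly checked with back-of-the-envelope geometry. The numbers below are illustrative assumptions rather than figures from the article: a foveal region subtending about 5 degrees of visual angle, and a binocular visual field of roughly 200 by 130 degrees.

```python
import math

# Order-of-magnitude check on the foveal fraction of the visual field.
# Both inputs are rough assumptions, not measured values.
fovea_diameter_deg = 5.0                    # assumed foveal extent
field_w_deg, field_h_deg = 200.0, 130.0     # assumed binocular field

fovea_area = math.pi * (fovea_diameter_deg / 2) ** 2   # ~19.6 square degrees
field_area = field_w_deg * field_h_deg                 # 26,000 square degrees

print(f"{100 * fovea_area / field_area:.2f}%")  # ~0.08%, consistent with ~0.1%
```

Under these assumptions the fovea covers on the order of a tenth of a percent of the field, in line with the quoted figure.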
The fact that you don’t notice the rest of the world transforming into a blurry dreamscape every time you glance at your watch is a testament to the sublime engineering in the visual cortex. As you take in the view of a room, your brain sees not only the picture in front of you, but also the images captured during saccades, the involuntary, staccato twitches your eyes make as they dart around the scene. These images, plus your visual memory, together form a mental model of the space around you that is updated with every glance. Thus, even though only a tiny fraction of the field of vision is in focus at any given moment, the entire panorama seems equally sharp, no matter where you’re looking.
This act of neural acrobatics relies on the eye’s ability to redirect its focusing power in any direction. Eyes with less than perfect acuity require assistance from external lenses. Contacts, like a point-and-shoot photographer, move with the center of the eye to keep the ideal light-bending power where it will have the greatest impact, while more static eyeglasses cover the majority of the field of vision with the same magnification to provide clarity at every angle.
Visual acuity — what your optometrist is measuring when she gives you your prescription — is the eye’s version of resolution. Adding glasses or contacts to the eye’s focusing power is akin to upgrading to a higher-resolution screen — sort of. Higher resolution means not just more pixels — that is, more bits of light data — but smaller pixels, because resolution is a measurement of data spread across a given area. With a consistent number of pixels, a larger screen (a greater field of vision) would actually deliver worse resolution, as the same data are diluted across a greater area. Since the upper limit of what the human eye can perceive depends on pixel density and viewing distance, not raw pixel count, there is no reason to assume that 8K screens go beyond what viewers can appreciate.
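The density argument can be made concrete with a quick calculation of angular pixel density, i.e. pixels per degree of visual field. A common rule of thumb is that 20/20 vision resolves about one arcminute, or roughly 60 pixels per degree; the 65-inch panel width and 2.5-meter seating distance below are illustrative assumptions, not figures from the article.

```python
import math

def pixels_per_degree(h_pixels: int, screen_width_m: float, distance_m: float) -> float:
    """Average angular pixel density across the screen's horizontal span."""
    span_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return h_pixels / span_deg

# A 65-inch 16:9 panel is about 1.44 m wide; assume a 2.5 m viewing distance.
width_65in_m = 1.44
for name, h_pixels in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
    ppd = pixels_per_degree(h_pixels, width_65in_m, 2.5)
    print(f"{name}: {ppd:.0f} pixels/degree")
```

Under these assumptions, 1080p sits near the classic 60 pixels-per-degree threshold while 4K and 8K exceed it; move the couch closer or enlarge the screen and the higher resolutions regain their advantage, which is why density relative to viewing distance, not pixel count, is the quantity that matters.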
But the benefits of 8K go well beyond simply enabling larger screens. Tech reviewers have argued that the increased resolution of 8K screens can render images with softer, more realistic edges, which is crucial for the viewer’s perception of depth — or, in other words, for that jumping-off-the-screen realism viewers crave from native content. Some have even noted that “the images are so sharp that they look like moving printed photographs; there is absolutely no evidence of pixelation even if your face is an inch from the set.”
Then there’s the strange matter of hyperacuity, one of the most mystifying remaining questions about human visual processing. “Our visual acuity is actually significantly higher than what you would expect, based on both the optics and the circuitry of the eye,” says Dr. Martinez-Conde. In other words, like a detective on a TV police procedural who makes the absurd demand that some poor technician “enhance” the blurry security footage of a crime scene, the visual cortex uses unknown means to create visual information out of thin air. Dan Sasaki, the VP of Optical Engineering at Panavision, noted in a 2017 presentation that the greater sub-pixel detail in the image “provides the viewer with much more information from which to render the images in their brains, and this provides a sense of greater depth and more realism.”
So the theoretical limit on how much detail the human eye can actually process may be more of a guideline than a rule. Dr. Martinez-Conde points out that the enigma encompasses all types of perception. “Fundamentally,” she adds, “we don’t understand the neural basis of experience.” One thing is clear, however: The 33 million pixels that 8K TVs are able to display are changing the way we watch television, making it a truly immersive viewing experience.