You keep hearing about AI-powered cameras, but it often sounds like the Wizard of Oz is working behind the curtain. Now Samsung posted a fairly detailed explanation of how the Galaxy S21 series uses AI to create impressive portrait shots.
The selfie camera processing is pretty straightforward. First, the software identifies faces in the image and marks them for further processing (this is called segmentation). Second, detail in hair, eyes and facial features is enhanced. The camera also tweaks the white balance to achieve natural skin tones regardless of the ambient light.
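The white balance step can be illustrated with a classic gray-world correction, where each channel is scaled so its average matches the overall average, pulling tinted light back toward neutral. This is a generic toy sketch, not Samsung's actual algorithm; the pixel values and function name are made up for illustration.

```python
# Gray-world white balance sketch in plain Python: scale each channel so its
# mean equals the image-wide mean, neutralizing a color cast from ambient
# light. Toy data, not the phone's real processing.

def gray_world_balance(pixels):
    """pixels: list of (r, g, b) tuples with values in 0..255."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3  # target: all channel means equal
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A warm-tinted patch: the red mean is high, the blue mean is low.
warm = [(200, 150, 100), (180, 140, 90), (210, 160, 110), (190, 150, 100)]
balanced = gray_world_balance(warm)
```

After correction the three channel means end up nearly equal, which is what "natural skin tones regardless of the ambient light" amounts to in this simplified model.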
Portrait mode is much more involved. It starts with segmentation so later stages know which parts of the image are humans, which are pets and which are just background. This way the correct processing can be applied where needed. This segmentation map is used to create a rough “seed map”.
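A rough seed map like the one described can be sketched as a simple threshold over per-pixel segmentation scores. The probabilities, threshold value and function name below are illustrative assumptions; Samsung hasn't published the model's actual classes or confidence handling.

```python
# Sketch: derive a rough binary "seed map" from per-pixel segmentation
# probabilities. All values here are made up for illustration.

def seed_map(person_prob, threshold=0.5):
    """person_prob: 2D list of probabilities that a pixel is part of a person.
    Returns a binary map: 1 = subject seed, 0 = background seed."""
    return [[1 if p >= threshold else 0 for p in row] for row in person_prob]

probs = [
    [0.05, 0.20, 0.10],
    [0.30, 0.92, 0.88],
    [0.10, 0.95, 0.90],
]
seeds = seed_map(probs)
# seeds -> [[0, 0, 0], [0, 1, 1], [0, 1, 1]]
```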
Next is the “tri-map”, which is important as it highlights the border between the subject and background. Then the matte map traces fine detail within that border – it keeps hair and facial features from blending into the background. Finally, the depth estimation pass calculates the distance to objects, which will be used to create the shallow depth of field effect.
If you’ve encountered less-than-perfect portrait modes, you’ll know how often strands of hair and other fine details are blurred into the background, giving the image a very artificial feel. It’s the accuracy of the matte map that sets Samsung’s portrait shots apart from the rest.
Below is an example tri-map (center image): white marks the subject, black the background and grey the uncertain areas. On the right is the resulting matte map, which refines the subject/background separation.
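The white/grey/black coding can be sketched as follows, assuming a binary subject mask is already available: any pixel whose neighborhood mixes subject and background becomes "unknown", producing the grey border band the matting stage later resolves. The 3×3 neighborhood and the 0/128/255 coding are illustrative choices, not the phone's real parameters.

```python
# Toy tri-map construction from a binary subject mask: pixels near the
# subject/background border are marked unknown (grey), everything else keeps
# its class. Neighborhood size and grey-level coding are arbitrary here.

SUBJECT, UNKNOWN, BACKGROUND = 255, 128, 0

def tri_map(mask, radius=1):
    """mask: 2D list of 0/1 (1 = subject). Returns a 2D tri-map."""
    h, w = len(mask), len(mask[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Collect the distinct mask values around (y, x).
            vals = {mask[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))}
            if vals == {0, 1}:
                row.append(UNKNOWN)      # border: both classes nearby
            elif mask[y][x] == 1:
                row.append(SUBJECT)
            else:
                row.append(BACKGROUND)
        out.append(row)
    return out

# 7x7 mask with a 3x3 subject block in the middle.
mask = [[1 if 2 <= y <= 4 and 2 <= x <= 4 else 0 for x in range(7)]
        for y in range(7)]
tm = tri_map(mask)
```

In the result, the center of the subject block stays white, the far corners stay black, and a one-pixel grey ring surrounds the subject, just like the uncertain band in the example image.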
The phone uses the seed map to apply image enhancements to the subject and to blur the background. Then it adds the matte map to create a sharp division between subject and background. All these layers are processed and combined into the final image in 3 seconds.
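The compositing step above boils down to alpha blending: treat the matte map as a per-pixel alpha, keep the enhanced subject where alpha is high and the blurred background where it is low. The sketch below does this for a one-pixel-wide strip with a simple box blur standing in for the bokeh; all values are toy stand-ins for the phone's actual processing.

```python
# Sketch of the final composite: out = alpha * subject + (1 - alpha) * blurred
# background, with the matte map supplying alpha. Toy 1D data.

def box_blur(row, radius=1):
    """Simple 1D box blur as a stand-in for the shallow depth-of-field blur."""
    n = len(row)
    out = []
    for i in range(n):
        window = row[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def composite(subject, background, alpha):
    """Blend per pixel: alpha=1 keeps the subject, alpha=0 the blurred bg."""
    blurred = box_blur(background)
    return [a * s + (1 - a) * b
            for s, b, a in zip(subject, blurred, alpha)]

subject    = [200, 210, 220, 230]   # enhanced subject luminance
background = [40, 80, 120, 160]     # scene behind the subject
alpha      = [1.0, 0.8, 0.2, 0.0]   # matte map: 1 = subject, 0 = background

final = composite(subject, background, alpha)
```

The fractional alpha values in the middle are exactly where fine detail like hair lives; an accurate matte there is what keeps strands from smearing into the blurred background.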