AI will help phone photos surpass the DSLR, says Qualcomm


Today’s best smartphones are photography and videography beasts. Typical high-end devices sport multiple rear cameras across various zoom factors, a ton of horsepower for processing images and videos, and comprehensive camera apps with loads of modes.

We’re already familiar with technologies like 10x periscope cameras, 8K video recording, advanced object-erasing smarts, and more. So what’s next for the industry? We spoke to Judd Heape, vice-president of product management for cameras at Qualcomm, about the future of smartphone photography.

AI to be the foundation for the future?


Smartphone photography has leaned heavily on machine learning over the years, using it for tasks like noise reduction, object/shadow/reflection removal, smoothing out video judder, and more. This will clearly remain one of the biggest focus areas for smartphone brands and chipmakers alike over the next few years, and Heape believes it will actually be the single biggest one. He also outlines a few ways AI will step up in the coming years.

Going forward in the future, we see a lot more AI capability to understand the scene, to understand the difference between skin and hair, and fabric and background and that sort of thing. And all those pixels being handled differently in real-time, not just post-processing a couple of seconds after the snapshot is taken but in real-time during like a camcorder video shoot.

Fortunately, we’re already seeing advanced AI-driven image processing applied to video, or at least at video-like frame rates. For example, some phone brands offer real-time viewfinder previews for features like night mode, while Google uses its HDRNet neural network for video HDR.

AI is one of the biggest developments in the camera space, and Qualcomm camera bigwig Judd Heape thinks it could handle the entire image capture process in the future.

The use of AI has come some way from the modes first introduced in 2018, but Heape explains that AI in photography can be divided into four stages.

The first stage is pretty basic; AI is used to understand a specific thing in an image or scene. The second stage sees AI control the so-called 3A features, namely auto-focus, auto-white balance, and auto-exposure adjustments. The Qualcomm engineer reckons that the industry is currently at the third stage of the AI photography game, where AI is used to understand the different segments or elements of the scene.

We’re three to five years away from reaching the holy grail of AI photography.

This does indeed appear to be the case right now, as technologies like semantic segmentation and facial/eye recognition leverage AI to recognize specific subjects or objects in a scene and adjust them accordingly. For example, today’s phones can recognize a face and make sure it’s properly exposed, or detect a skewed horizon and suggest you level the phone.
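To make that third stage a little more concrete, here’s a rough Python/NumPy sketch of what segment-aware tuning could look like once a per-pixel mask is available. The segment labels, gain values, and function names are purely illustrative assumptions, not Qualcomm’s or any phone maker’s actual pipeline.

```python
import numpy as np

# Hypothetical segment labels; a real pipeline would get these per frame from a
# segmentation network rather than hard-coded constants.
SKIN, SKY, BACKGROUND = 0, 1, 2

def segment_aware_tune(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply a different, simple tonal tweak to each recognized segment.

    image: H x W x 3 float array with values in [0, 1]
    mask:  H x W integer array of segment labels
    """
    out = image.copy()

    # Lift skin regions slightly, a crude stand-in for face-aware auto-exposure.
    out[mask == SKIN] = np.clip(out[mask == SKIN] * 1.15, 0.0, 1.0)

    # Add a touch of contrast to the sky so clouds don't wash out.
    sky = out[mask == SKY]
    out[mask == SKY] = np.clip((sky - 0.5) * 1.2 + 0.5, 0.0, 1.0)

    # Everything else is left untouched here; a real pipeline might denoise it.
    return out

# Toy usage: a 2x2 "image" with one skin pixel, one sky pixel, and two background pixels.
img = np.full((2, 2, 3), 0.4)
seg = np.array([[SKIN, SKY], [BACKGROUND, BACKGROUND]])
print(segment_aware_tune(img, seg))
```

The point of the sketch is simply that each class of pixel gets its own treatment; doing this per frame, fast enough for video, is the hard part Heape is describing.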
