iPhone 11 Pro Deep Fusion Camera Tested: Your Questions Answered

BY Steve Litchfield

Published 1 Nov 2019


Apple’s new ‘Mad Science’ image processing upgrade for the iPhone 11 series is now live, as part of iOS 13.2, which rolled out a day or so ago. But there’s still lots of confusion as to what this is – and isn’t. Is it a step forward? Undoubtedly. Will users really notice the difference? Probably not. Do they need to do anything to get the benefits? Well, maybe… I’ll explain.

How Deep Fusion on the iPhone 11 and iPhone 11 Pro Works

Let’s start with what Apple itself says about the Deep Fusion update:

iOS 13.2 introduces Deep Fusion, an advanced image processing system that uses the A13 Bionic Neural Engine to capture images with dramatically better texture, detail and reduced noise in lower light on iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max.

And later, by way of more explanation:

Pixel-by-pixel the shot is run through an additional four steps, all in the background and basically instantaneously, all working to eke out the most detail. The sky and walls in a given shot work on the lowest band. Hair, skin, fabrics, and other elements are run through the highest band. Deep Fusion will select details from each of the exposures provided to it to pick out the most detail, color, luminance, and tone for the final shot.

The iPhone’s A13 chipset represents the most processing power ever thrown at image capture in real time, yet Deep Fusion still takes a second or so on the iPhone 11 – enough to notice when it triggers, with a slight delay before the finished photo is ready for viewing. (You can carry on snapping in the meantime, though; there’s no need to wait for visual or audible confirmation, don’t worry!)

Notice the word ‘triggers’ – most of the time your iPhone 11 Camera application will use its usual Smart HDR system, combining multiple exposures to give higher dynamic range and less digital noise. But this is a relatively unintelligent process: Smart HDR runs on rails and spits out an algorithmic photo without any real sense that specific subjects or areas of the image might need special treatment.

And when there’s enough light (so outdoors in daytime, or in good light indoors) this is all just fine and no extra ‘AI’ is needed. But when light levels drop, say in the evening or indoors under indifferent fluorescent or incandescent lighting, Apple had the idea of using the ‘A13 Bionic Neural Engine’ to look at parts of an image which might otherwise be noisy or uncertain and ‘fix’ them, even at the pixel level, by optimising the output according to what the AI thinks each material might be. So wool garments get a different treatment from faces, and different again for wood and then carpet, each treatment designed to bring out more genuine detail.
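To make that ‘per-material’ idea a little more concrete, here’s a tiny sketch in Swift. It’s purely my own illustrative model of the sort of weighting being described (none of the names below are Apple API, and the real pipeline is vastly more sophisticated): it blends a sharp-but-noisy exposure with a cleaner reference exposure, pixel by pixel, leaning towards whichever source suits the material a classifier thinks it’s looking at.

```swift
import Foundation

// Hypothetical material classes, loosely matching the 'bands' described above
// (sky/walls at the low end, hair/skin/fabric at the high end).
enum Material {
    case skyOrWall      // low-detail band: favour noise reduction
    case skinHairFabric // high-detail band: favour the sharpest exposure
    case other
}

/// Fuse one pixel from a short (sharp, noisy) exposure and a longer
/// (cleaner, softer) reference exposure, weighted by material.
func fusePixel(sharp: Double, reference: Double, material: Material) -> Double {
    let detailWeight: Double
    switch material {
    case .skinHairFabric: detailWeight = 0.8  // pull most detail from the sharp frame
    case .skyOrWall:      detailWeight = 0.2  // lean on the clean frame to suppress noise
    case .other:          detailWeight = 0.5
    }
    return detailWeight * sharp + (1 - detailWeight) * reference
}

// Tiny usage example on a 1x3 'image': a patch of sky, skin and fabric.
let sharpFrame: [Double]     = [0.62, 0.48, 0.55]
let referenceFrame: [Double] = [0.60, 0.44, 0.50]
let materials: [Material]    = [.skyOrWall, .skinHairFabric, .skinHairFabric]

let fused = zip(zip(sharpFrame, referenceFrame), materials).map {
    fusePixel(sharp: $0.0, reference: $0.1, material: $1)
}
print(fused) // roughly [0.604, 0.472, 0.54]
```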

This all happens more often for telephoto shots, by the way, because of that lens’s smaller aperture and thus the reduced light reaching the sensor, even outdoors. So zoomed photos should benefit even more than those taken on the standard or wide angle lenses.
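For a rough sense of how much that matters: light gathered per unit of sensor area scales with the inverse square of the f-number. Assuming the commonly quoted iPhone 11 Pro apertures of f/1.8 for the main camera and f/2.0 for the telephoto (do check Apple’s spec sheet), the back-of-envelope sum looks like this:

```swift
// Back-of-envelope only. The f-numbers are the commonly quoted iPhone 11 Pro
// specs (f/1.8 main, f/2.0 telephoto), treated here as assumptions.
let mainFNumber = 1.8
let teleFNumber = 2.0

// Relative light per unit of sensor area scales with 1 / f-number squared.
let relativeLight = (mainFNumber / teleFNumber) * (mainFNumber / teleFNumber)
print(relativeLight) // ≈ 0.81, i.e. the telephoto gathers roughly a fifth less light
```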

Then, when light gets lower still, Apple’s new Night mode long exposures take over – it’s not clear whether any Deep Fusion ‘AI’ work is still done at that point; only Apple’s engineers could say definitively. So think of Apple’s camera system as being fully automatic at all light levels – there are no modes to turn on or off (HDR, Night, whatever), you just point and shoot. Which is how it should be. Smart HDR, Deep Fusion and Night mode form a continuum that the user simply doesn’t need to worry about.
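If it helps to picture that continuum, here’s another toy Swift sketch. It’s entirely my own invention: the thresholds, and indeed the idea of a single lux cut-off, are assumptions rather than Apple’s documented behaviour.

```swift
// Illustrative model only: the Camera app exposes no such switch and Apple
// publishes no thresholds; the numbers below are invented to show the idea.
enum CapturePipeline {
    case smartHDR    // plenty of light: standard multi-exposure HDR
    case deepFusion  // mid-to-low light: pixel-level ML fusion
    case nightMode   // very dark: long multi-frame exposure
}

/// Pick a pipeline from an estimated scene brightness in lux.
func pipeline(forSceneLux lux: Double) -> CapturePipeline {
    switch lux {
    case ..<10:  return .nightMode
    case ..<600: return .deepFusion
    default:     return .smartHDR
    }
}

print(pipeline(forSceneLux: 5))      // nightMode
print(pipeline(forSceneLux: 150))    // deepFusion
print(pipeline(forSceneLux: 5_000))  // smartHDR
```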

Show me examples

Of course, one problem I’ll have in proving any of this is that the iPhone 11’s images were already top-notch, so am I – are you – even going to be able to tell much difference? Perhaps at the pixel level, yes. I’ll do my best in the examples below:

Here’s a 1:1 crop of a shot of a plaque on a church interior wall, in dim lighting. First the photo without any Deep Fusion processing, and then – below – with the extra second’s worth of pixel-level AI:

1:1 crop – without Deep Fusion

1:1 crop – with Deep Fusion

Hopefully you can just about see improvements in contrast and colour. Note, though, that this isn’t a simple edge-enhancement or sharpening exercise, as on many other camera phones – the improvements here come from genuine analysis.

Deep Fusion works best with blocks of texture it recognises, though – skin, fabric, and so on. Here are similar ‘before’/’after’ 1:1 crops from a low light shot of myself, i.e. a face, snapped by my willing helper (daughter!):

1:1 crop – without Deep Fusion

1:1 crop – with Deep Fusion

Now, I can’t deny that the top (non-Deep-Fusion) crop has a certain ‘pointillist’, arty feel, but you can see how the wood and the skin in the Deep Fusion version are less digitally noisy – moreover, look how genuine details (ok, ok, imperfections) in the skin are shown, rather than a blurred, noisy rendition. So you see more detail in the hair below my ear, you get to see a nose hair (arrghh!!), you get to see slight pock marks in the skin’s surface. (Don’t worry, my teeth aren’t really that yellow; the low light had a warm cast, and you’re seeing reflections.)

Finally, here are similar ‘before’/’after’ 1:1 crops from a low light shot of a toy bear, with masses of fur and general texture to ‘lock onto’:

1:1 crop – without Deep Fusion

1:1 crop – with Deep Fusion

Notice how much more genuine (fur) detail there is here, down at the pixel level. And no, it’s not just indiscriminate sharpening or edge enhancement; this is thoughtful, AI-driven enhancement at the pixel level, in this case recognising the texture and looking to bring out individual strands of hair and fur. It’s pretty amazing to see this happening in front of your eyes, and hopefully the 1:1 crops above show what’s happening better than a web-scaled snap of someone in a pullover (sorry, Apple!).

Should I use it?

If you’ve been paying close attention then you’ll be wondering how I managed to shoot photos without Deep Fusion, given that it’s now automatic and not under user control. Ah – therein lies a little trick, or rather a caveat. The iPhone 11 series launched with a setting, ‘Photos Capture Outside the Frame’, letting you effectively widen a photo out using detail from a simultaneously captured wide angle shot, e.g. restoring a detail that was accidentally cropped off at shooting time. When this setting is enabled, Deep Fusion is automatically disabled, since (presumably) there isn’t the horsepower or expertise (yet) to do all the pixel-level AI as well as stitching in part of a separate image, captured with different optics from a slightly different angle. Fair enough: I can see the potential complications for an already complex computational process. So, in preparing my test shots, I took one with the ‘Capture Outside the Frame’ setting enabled and one without.

Easy, eh? In real life, it’s up to you. If you often mess up your framing then I’d suggest forgoing Deep Fusion and keeping the ‘Capture Outside the Frame’ setting enabled. If, on the other hand, you’re a dab hand at taking perfectly framed photos, then turn the setting off and enjoy the higher quality photos in mid-to-low lighting conditions.
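PS. If you build your own camera app rather than using Apple’s, the nearest public knob appears to be AVFoundation’s photo quality prioritisation, added in iOS 13. Whether the .quality option maps exactly onto Deep Fusion is an assumption on my part (Apple only documents it as prioritising quality over speed), but the shape of the API looks like this:

```swift
import AVFoundation

// Sketch only: asking AVFoundation for its highest-quality processing on iOS 13+.
// Whether .quality engages Deep Fusion specifically is an assumption, not something
// Apple spells out.
let photoOutput = AVCapturePhotoOutput()
photoOutput.maxPhotoQualityPrioritization = .quality   // allow the slowest, best pipeline

func makeSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    settings.photoQualityPrioritization = .quality     // per-shot request (vs .speed / .balanced)
    return settings
}

// Then, in a real app, something like:
// photoOutput.capturePhoto(with: makeSettings(), delegate: yourDelegate)
```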