As we watch algorithms and AI do their thing, we’re all wondering where they will take photography. Back in the early days of DSLRs (and still today, if we’re being honest), many professional photographers lamented how the technology lowered the bar of entry to the profession, cutting the cost of both the gear and the practice. Gone were the days of film, and of the time and skill it took to develop that film. With a DSLR, anyone with a memory card could get instant feedback on whether the exposure at hand was good or bad.
Of course, this lower bar brought its own set of problems and challenges that we’re still sorting out. I’ve heard photographers hand-wringing over the glory days of film and how soccer moms with a Canon Rebel ruined the wedding industry. The difference between good enough and great is a tough distinction to sell to a photography customer, and maybe that’s what the grumpy photographers are really lamenting.
But we’re past all of that now – for the better, I think. While I love my old film cameras, I’m not grabbing a film camera on most days when the convenience of digital allows me to increase the volume of images with virtually no additional cost.
Smartphones changed the photography game again in the post-iPhone era. The industry still has to reckon with that lower bar of quality for digital photography, and quick editing apps on a phone stripped much of the time sink out of editing, making casual photos look better with little effort.
And now computational photography and AI are driving photography to a different place again. But is that okay for the art of photography? Is this another film-to-digital transition? Is it like the death of the compact camera in the 2008-2012 era? Or is it a whole new thing?
What is Computational Photography?
At its core, computational photography involves using advanced algorithms and software to process and manipulate image data, extending or enhancing traditional photography techniques. We’re leveraging the power of modern processors, machine learning, and artificial intelligence to achieve results previously thought impossible or impractical. Key algorithms that contributed to this revolution along the way include HDR+ by Google, Deep Fusion by Apple, and EnhanceAI by Skylum.
From HDR Imaging to Simulating Bokeh
One area where computational photography truly stands out is in High Dynamic Range (HDR) imaging. HDR works by intelligently merging multiple exposures, allowing photographers to capture a broader range of tones and colors in a single image. While this technique has been around for years, the proliferation of advanced computational methods has made it more accessible, particularly in smartphones like Apple’s iPhone, Google’s Pixel, and Samsung’s Galaxy lines.
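To make the “intelligently merging multiple exposures” idea concrete, here is a toy sketch in Python with NumPy. It is not any vendor’s actual HDR pipeline (those involve alignment, tone mapping, and far more sophisticated weighting); it just illustrates the core trick of exposure fusion: weight each pixel of each bracketed frame by how well-exposed it is, so mid-tones dominate and blown-out or crushed pixels contribute little. The function name and the Gaussian weighting constants are my own illustrative choices.

```python
import numpy as np

def fuse_exposures(frames):
    """Merge bracketed exposures (values in 0..1) into one image by
    weighting each pixel toward mid-gray: well-exposed pixels count
    more than blown-out or crushed ones (simplified exposure fusion)."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    # Gaussian well-exposedness weight, centered on mid-gray (0.5).
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = sum(weights)
    # Per-pixel weighted average across the bracket.
    return sum(w * f for w, f in zip(weights, frames)) / total

# Three simulated exposures of a flat scene: under, correct, over.
dark = np.full((4, 4), 0.1)
mid = np.full((4, 4), 0.5)
bright = np.full((4, 4), 0.9)
result = fuse_exposures([dark, mid, bright])
```

With symmetric under- and over-exposed frames, the fused result lands near the well-exposed middle frame, which is exactly the behavior you want from a bracket merge.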
However, the real magic of computational photography lies in its ability to simulate optical phenomena, such as bokeh, through the use of depth-mapping technology and algorithms. Some of us remember struggling with depth-mapping plug-ins in Photoshop to fake a shallower depth of field in images that were otherwise a little too ordinary. Virtually every new smartphone is now capable of producing a convincing bokeh effect by analyzing depth information and applying selective blur to different parts of the image. And they just do it. Automatically! Even the most casual users expect their phone selfies to pop with pleasing bokeh.
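The depth-and-selective-blur idea can be sketched in a few lines of NumPy. This is a toy “portrait mode,” not how any phone actually does it (real pipelines use learned depth estimation and lens-shaped blur kernels): pixels near the focus plane stay sharp, everything else is swapped for a blurred copy. All names and thresholds here are illustrative assumptions.

```python
import numpy as np

def mean_blur(img, radius=1):
    """Mean filter over a (2r+1) x (2r+1) window, edge-padded."""
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    k = 2 * radius + 1
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def portrait_mode(image, depth, focus_depth, tolerance=0.1):
    """Keep pixels near the focus plane sharp; replace the rest
    with a blurred copy -- a toy bokeh simulation."""
    blurred = mean_blur(image)
    background = np.abs(depth - focus_depth) > tolerance
    out = image.astype(np.float64).copy()
    out[background] = blurred[background]
    return out

# A checkerboard scene: left half is the subject (depth 0.2),
# right half is distant background (depth 1.0).
image = (np.indices((6, 6)).sum(axis=0) % 2).astype(np.float64)
depth = np.where(np.arange(6) < 3, 0.2, 1.0) * np.ones((6, 1))
shot = portrait_mode(image, depth, focus_depth=0.2)
```

The subject half of `shot` is untouched while the background half is smoothed, which is the whole trick: the depth map decides where the blur lands.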
Improving Low-Light Performance
Another significant breakthrough in computational photography that we’ve seen over the years is the improvement of low-light performance. Night photography has always been a challenge due to noise and limited dynamic range. However, computational methods, such as multi-frame noise reduction, have significantly improved the quality of images captured in low-light conditions. By stacking multiple exposures and using advanced noise reduction algorithms, cameras like the excellent Sony RX100 line and smartphones like the Samsung Galaxy S21 can now produce cleaner, more detailed images with less noise, even at high ISO settings.
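The stacking trick behind multi-frame noise reduction is easy to demonstrate. Below is a minimal sketch, assuming perfectly aligned frames (real systems must first register a handheld burst): random sensor noise varies frame to frame while the scene does not, so averaging N frames cuts noise standard deviation by roughly the square root of N.

```python
import numpy as np

def stack_frames(frames):
    """Average aligned burst frames: random noise cancels out while
    the constant scene survives (noise std drops ~ sqrt(N))."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

rng = np.random.default_rng(0)
scene = np.linspace(0, 1, 100).reshape(10, 10)   # the "true" signal
# Sixteen noisy captures of the same scene, as in a night-mode burst.
burst = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(16)]
stacked = stack_frames(burst)

single_err = np.abs(burst[0] - scene).mean()
stacked_err = np.abs(stacked - scene).mean()
```

With 16 frames, the stacked image sits much closer to the true scene than any single noisy capture, which is why night modes fire off a burst instead of one long exposure.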
The Blurred Lines Between Computational Photography and AI-Generated Images
As computational photography continues to evolve, we can expect to see even more advanced features that enable photographers to capture images in challenging situations. For instance, AI-driven autofocus systems, which can intelligently track subjects and predict their movements, are a standard feature in all serious cameras from Canon, Nikon, and Sony, among others.
But AI is doing even more: it now lets us take the photographer out of the equation entirely and simply ask for what we want to see in an image.
Are AI-generated images considered photography, and where do we draw the line between computational photography and AI-generated photos? The artistic lines are undoubtedly blurred, as both techniques involve manipulating image data and employing algorithms to create visually stunning results.
There is definitely a skill to crafting the appropriate prompts for AI-generated images that successfully depict the requestor’s intent. Does a higher level of skill at crafting such a prompt rise to the level of art? If it does, is that photography?
Anddddd there ya go. Could see this coming from miles away. Headshot photographers are poised to become the first genre of photography to become obsolete due to AI. https://t.co/DnUcKPmiuX
— Jeremy Cowart (@jeremycowart) March 17, 2023
You could also argue that the primary distinction between the two lies in the origin of the image data. In computational photography, the initial image is captured by a camera or smartphone, with subsequent enhancements and manipulations applied by the device’s software. On the other hand, AI-generated images are entirely synthesized by algorithms, with no initial photographic input. And then, of course, there is all of the training data from which the AI learned its image-generating skills.
Another controversial example of this blurred line is the Samsung Galaxy S21 Ultra’s moon photography feature. The smartphone has been accused of replacing users’ photos of the moon with built-in or stock images of the moon (though the reality is a little more complicated than that), raising questions about the authenticity of the images and whether they can still be considered photography.
Computational photography is a powerful tool that is pushing the boundaries of what is possible with image capture and processing. As the industry continues to develop, we can anticipate more innovative features that will allow photographers to capture the world in previously unimaginable ways. The blurred lines between computational photography and AI-generated images present fascinating questions about the future of photography as an art form and technology’s role in shaping it.
We’re already seeing the potential for massive impacts from AI-generated images in corporate headshot photography. What’s next?
Aside from the potential impact on the profession of photography, how do computational photography and AI tools impact your outlook on the art and craft of photography?