Is Computational Photography Still Photography?

It's incredible to learn about all the technology built into smartphone cameras that weigh as much as a paperclip. But with all this technology, is it still you taking the picture, or are you just a moving tripod carrying a computer around to take the picture for you?

I imagine this is why people still shoot film. A film camera holds plenty of technology of its own, with the film's layers, the light meter, and the camera's autofocus, but once the shot is taken, there's no stabilization and no HDR Plus to rescue the over- or underexposed parts of the photo. The shot is what it is. It's as simple as that. Can that way of shooting still be seen as photography, or should we partner up with the technology and get the best shots together: perfect exposure, almost no noise, and no blur from a moving subject?

This video below describes how Google's Pixel 2 camera works, how it stabilizes footage, and how it's tested.

With mirrorless cameras, you can see the photo before you've taken it. Mobile phones come with technology that makes it possible to take photos in the dark. My question is this: when I look at those images, am I going to think about what's captured and the story, or am I going to think about which camera I used and how grateful I am for the tech it comes packaged with?

What I’ve done is bought a Fujifilm X-T2 body and an adapter for my old Nikon lenses, which makes the 50mm come very close to an 85mm. It’s manual focus, and I enjoy this way of shooting. It’s slower, and I really need to see the shot, and I don’t just snap away. The camera has focus-peaking so it tells me when something is supposed to be in focus, but I often have to take the shot quickly, especially when I’m out in the streets of Paris. This makes for some out-of-focus shots and blurring, but it also gives me more opportunities to think about the shot and compose it rather than having to snap away thinking I can fix exposure in post.

One thing I know for sure is that if mobile phones are taking these steps and enabling laypeople to make great videos and photos, we definitely have to keep improving our work too. What do you guys think? Is photography still what I think it is, or is it time for me to adapt to computational photography? At the end of a shoot, I suppose what matters is whether you've got something worth showing the world.

Photo by Alex Blăjan on Unsplash.

Wouter is a portrait and street photographer based in Paris, France. He's originally from Cape Town, South Africa. He does image retouching for clients in the beauty and fashion industry and enjoys how technology makes new ways of photography possible.

13 Comments

A technically perfect image of a boring, poorly composed subject (if there is a subject at all) is not a great photo. I love that technology is making it easier to nail the technical side of photography, but technical perfection does not make a great image. Neither does shooting film, or "slowing down" your process, or using medium format, or large format, or...

Computational photography is the future. I love it. I can't wait for the day I can edit EVERYTHING in post: focus, depth of field, and more. That'll be fabulous! But that's because I really enjoy editing pictures.

No. That's because you're lazy or don't see the artistic value of a photo (and film photography is "more artistic" than anything digital as well).

This goes back to what Andrew said in the first comment: "A technically perfect image of a boring, poorly composed subject (if there is a subject at all), is not a great photo."

Unless the camera is also a time machine with a teleportation device that also materializes stuff so that I can take photos of a sunrise at noon of objects that don't exist in the frame, I still need to get the light right, I still need to compose properly, find the right location, a good subject, etc...

This eternal discussion of film vs. digital is getting really old. People were "photoshopping" photos in the darkroom long before Adobe wrote the first line of Photoshop's code. Post-processing has only become more accurate, but it still requires knowledge, and shooting with that knowledge in mind, to get the most out of it. It may look like post-processing can turn a terrible photo into a masterpiece, but that's simply not true. I could argue that using expired film to obtain specific color tones is cheating because you get weird colors; how is that any different from color correcting a raw file in Lightroom?

If someone likes shooting on film, good for them. I totally understand the feeling of it: the manual work that needs to be done, the waiting without knowing whether you got any good photos out of a roll. It's a beautiful experience and a very different approach to photography that requires another mindset.

Take it easy my friend. Geeze...you're calling me lazy and unartistic because I said I liked editing and new technology? Come on now....

There are many ways to capture an image, whether by drawing, painting, film, or digital. Each carries with it a different level of artistic merit and its own variances. Being able to do more because of new technology just adds to what can be done. I'm sure some great photographers will use the new technology in great ways, and others will still take bad or boring pictures.

Is it a change in the craft of photography? Sure. But it's still about capturing light to convey a message. So what if it's faster and easier? Photography isn't about struggling; it's about the image. By the same token, is it really photography if you're not making your own wet plates and developing them?

You can't fix everything in post, though. I enjoy computational photography because when I take shots, I take them with how I want to process them in mind. Often I don't expose them "perfectly"; I expose for the elements I want in the photo and for how I want it to come out in the end, not straight out of camera.

No matter how you get from point A to point B, a good picture is a good picture, however it was created.

It's all photography, with or without the tech. A good shot is still a good shot, whether it was taken with a potato or the latest gadget. Technology does, however, make things easier in certain situations compared to older, more "manual" equipment. I'd like to see more transparency about which tools were used to capture a shot, so that praise for technical mastery can be given where it is due. Take action photography: I give far more respect to the photographer who can nail focus, exposure, and composition, and freeze the subject in the right place, using a medium format or Polaroid camera than to the guy who just picks a screen grab from his 4K-capable smartphone. Or the landscape photographer who shoots one scene for hours only to cut and slice the best bits of lighting into one final digital composite, versus the guy with the 8x10 who waits for that one fleeting moment in time and still manages to nail the shot. Even though the results can be equally stunning, putting these photos up against each other without disclosing the technique used seems a little unfair.

The guy who shoots in “spray and pray” burst mode is less likely to get the shot than a practiced shooter picking his moment. And the 4K videographer has to recognize the key frame after the fact. No, I’d rather choose my moment in the heat of the action like this -

https://tinyurl.com/ya4ezrgu

Or this -
https://tinyurl.com/y7wnwb86

True, phones are rapidly advancing, but phone shots and videos are inherently more "in the moment," in my opinion, so the use case is different here. Technology does democratize feature sets, enabling more people to shoot better, but that has never taken away photographers' skill sets or the industry's need for them. Tech can only go so far in enhancing what the lens sees; it's the last sentence you mentioned that has made the difference, and will keep making it in the future.

Would you rather be out in the field shooting, or in front of a computer screen editing / post-processing?

I spent way too much time in front of a screen when I was designing ICs in Silicon Valley. I’d much prefer being out in the field getting it right in the camera first and minimizing post. But what I do is closer to Ansel Adams’ straight photography most of the time, even shooting starfield pictures like these - https://www.activelightphotography.com/a-night-sky-of-your-very-own

Computational photography by definition says we no longer capture images, only data we manipulate into an image. Yet the minute I’d capture a picture on silver halide grains in Tri-X, it wasn’t really an image either, certainly not in the way my eyes see it. Then I’d do some wet darkroom processing and printing, more manipulation into a final image.

Only the methods have changed, regardless of the label you use. The end result, a 2D image that tells a story, is the same.