Artificial intelligence, driven by machine learning, is starting to seep into our image editing workflows, and it will soon render some manual edits obsolete. But which will go first?
Though there's a lot of image editing software out there that now features artificial intelligence (AI), its potential hasn't yet been fully realized by software companies. Skylum has thrown its hat firmly into the ring by offering what it dubs "the first image editor fully powered by artificial intelligence", and others aren't far behind. But there's still a long way to go when it comes to perfecting images using machine learning alone. However, there are seven specific edits that we think could easily be automated in the next few years, so read on to find out what they might be.
Cutting Out Subjects
This one's been around for a little while now, but it's set to get incredibly sophisticated. Plenty of online tools and image editing applications now offer autonomous subject cut-out. They use machine learning to scan an image, decide where the main subject is, and draw a path around it ready for masking. If you haven't used this before, take a few minutes to explore it in your favorite image editing software (if it has the feature) or try some of the tools online. For the most part, it does a good job.
However, things start to go awry when you're working with more complex shapes and textures such as hair, trees, or anything with naturally complicated edges. That's not too much of a bother, since most retouchers are used to working around this kind of issue: we make a selection and then refine it before moving on with our edits. But AI is still in its infancy in image editing.
Give it another five or ten years and we should see massive improvements, to the point where complex edges are no longer a factor in the cut-out. Taking it to the extreme, we should even be able to use a voice-activated assistant to cut out our images: "Alexa, please cut out the woman, the weeping willow tree, and the car passing in the background." It may sound far-fetched now, but this is already somewhat doable. Look at Adobe Photoshop, where the Object Selection tool lets you draw around particular subjects. I could do this myself by drawing three separate boxes around the subjects and having the software take care of the cut-out for me, or, if we set up the software correctly, we could have the assistant act as our mouse cursor.
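To give a concrete taste of where this already stands, here's a minimal sketch of automated subject cut-out using the open-source rembg Python library; the file names are placeholders, and rembg is just one of several tools that do this.

```python
# Minimal subject cut-out sketch using the open-source rembg library
# (pip install rembg). File names are placeholders.
from rembg import remove
from PIL import Image

# rembg runs a pretrained segmentation network to find the main subject.
input_image = Image.open("portrait.jpg")

# remove() returns the image with the background made transparent,
# i.e. the subject cut out and ready for masking or compositing.
cutout = remove(input_image)
cutout.save("portrait_cutout.png")  # PNG preserves the alpha channel
```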
Colorizing and Restoring Old Photos
Another realm of AI-powered editing that's reared its head these past few years is colorizing old monochrome photos. Machine learning can even refine facial features and remove folds or creases in antique images. Again, though, it's not perfect. Colors often have to be helped along with manual input, and masks still need to be created for things such as skin tone, clothing, and different textures. This is an area with massive potential for growth, though, as these kinds of edits are time-consuming when done manually.
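To make the idea concrete, here's a hedged sketch of one well-known approach using OpenCV's DNN module and the pretrained Zhang et al. colorization model. It assumes you've downloaded the model files named below; they are not bundled with OpenCV, so the paths are placeholders.

```python
# Sketch of automatic colorization with OpenCV and the pretrained
# Zhang et al. model. The three model files must be downloaded
# separately; the paths below are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("colorization_deploy_v2.prototxt",
                               "colorization_release_v2.caffemodel")
pts = np.load("pts_in_hull.npy")  # cluster centers for the ab color channels

# Wire the cluster centers into the network, as the reference demo does.
net.getLayer(net.getLayerId("class8_ab")).blobs = [
    pts.transpose().reshape(2, 313, 1, 1).astype(np.float32)]
net.getLayer(net.getLayerId("conv8_313_rh")).blobs = [
    np.full([1, 313], 2.606, dtype=np.float32)]

# Convert the old photo to Lab and feed only the L (lightness) channel.
bgr = cv2.imread("old_photo.jpg")
lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
L = cv2.resize(lab[:, :, 0], (224, 224)) - 50  # mean-centering per the paper

net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0].transpose(1, 2, 0)           # predicted color channels
ab = cv2.resize(ab, (bgr.shape[1], bgr.shape[0]))  # back to full size

# Recombine the original lightness with the predicted color channels.
out = cv2.cvtColor(np.concatenate([lab[:, :, :1], ab], axis=2), cv2.COLOR_LAB2BGR)
cv2.imwrite("colorized.jpg", (np.clip(out, 0, 1) * 255).astype("uint8"))
```

Even here, notice how much is left to the model: skin tones and fabrics often come out muted and still need the manual masking described above.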
Removing Distractions
This is an old one too: using AI to remove distracting elements. Say you have a cityscape scene but want to remove a couple of people and some signs from the background. We already have the editing technology to do this manually or via AI (just look at Google's new Pixel 6 and 6 Pro, where the feature is built right into the phones). But it falters when there are intersecting elements, some of which we want to remove and some of which we want to keep.
For example, take someone's hand in the foreground cutting across a car we wish to remove behind it: we're likely to see parts of the hand disappear as the software struggles to decipher which bits to remove. Soon, though, machine learning will be intelligent enough to remove distractions in the foreground or background, fill in the areas left behind, and even extend its reach to reducing and removing lens flares by drawing in overexposed sections and rebalancing the existing image data.
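For the fill-in step, classical inpainting already exists and is a useful baseline for what the ML versions improve on. Here's a minimal sketch with OpenCV; the file paths are placeholders, and in practice the mask could itself be generated by AI.

```python
# Remove a masked distraction by inpainting with OpenCV. White pixels
# in the mask mark what to remove; black pixels are kept. Paths are
# placeholders.
import cv2

image = cv2.imread("street_scene.jpg")
mask = cv2.imread("distraction_mask.png", cv2.IMREAD_GRAYSCALE)

# Classical inpainting fills the masked region from surrounding pixels;
# ML-based inpainters do the same job with learned scene priors.
cleaned = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("street_scene_clean.jpg", cleaned)
```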
Keywording Images for Cataloguing
Open up Lightroom and type the word "cat" into the search bar and you'll see that our image editing software is already starting to scan our back catalogs for subject types. For now, it's limited to very obvious and specific subjects, but soon it'll be able to correctly identify a huge array of things. Tedious keywording will become a thing of the past as we rely on the software to automatically return results based on what we search for. "Mountains", "Italy", or "Lamps" will quickly return every photo in your library featuring those subjects. We'll be able to use multiple search terms too: "cats on grass" or "the constellation Orion" will make it increasingly easy to find our favorite images.
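This kind of keyword-free search is already possible outside Lightroom. Here's a hedged sketch using OpenAI's CLIP model via the Hugging Face transformers library, which scores a photo against free-text queries; the checkpoint name is a real public model, while the file path and queries are placeholders.

```python
# Zero-shot photo search: score one image against free-text queries
# with CLIP. Requires: pip install transformers torch pillow
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

queries = ["cats on grass", "mountains", "the constellation Orion"]
image = Image.open("library/IMG_0001.jpg")  # placeholder path

# Embed the image and the text queries in a shared space and compare.
inputs = processor(text=queries, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0].tolist()

for query, p in zip(queries, probs):
    print(f"{query}: {p:.2%}")  # higher score = better match for this photo
```

Run this over a whole library and sort by score, and you effectively have keywording without ever typing a keyword.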
Complete Edits
Looking forward, I'm going to go out on a limb here and say that machine learning will become so efficient that it'll be able to study our previous edits and process future photographs to mimic what it thinks we'd do. That, after all, is what artificial intelligence is all about: feeding the machine data until it has a big enough input to start making assumptions based on what it has learned. So why would image edits be any different? Something like this already exists in the form of style transfer, where the styles of great artists are borrowed and applied to our images, so why not borrow from ourselves? All it would take is a few hundred of our edits across different genres fed into the machine, and it would edit future shots the way we would want them edited.
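As a toy illustration of the idea (not a real product), here's how the learning step might be sketched: turn each unedited raw into a handful of simple statistics, pair those with the slider values we historically chose, and fit a model that predicts our sliders for a new photo. Every name and file in this sketch is a hypothetical stand-in.

```python
# Toy sketch: learn a photographer's editing style as a regression from
# image statistics to slider values. All data files are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def features(image: np.ndarray) -> np.ndarray:
    """Crude per-image statistics: mean, std, and a coarse luminance histogram."""
    lum = image.mean(axis=2)
    hist, _ = np.histogram(lum, bins=8, range=(0, 255), density=True)
    return np.concatenate([[lum.mean(), lum.std()], hist])

# X: statistics for a few hundred of our unedited raws (one row each);
# y: the [exposure, contrast, saturation] values we applied to each.
X = np.load("raw_stats.npy")       # shape (n_images, 10), hypothetical
y = np.load("chosen_sliders.npy")  # shape (n_images, 3), hypothetical

model = RandomForestRegressor(n_estimators=200).fit(X, y)

# For a new photo, predict the edit we would probably have made.
new_photo = np.random.randint(0, 256, (400, 600, 3)).astype(np.float32)  # stand-in
exposure, contrast, saturation = model.predict([features(new_photo)])[0]
print(f"suggested edit: exposure={exposure:+.2f}, "
      f"contrast={contrast:+.2f}, saturation={saturation:+.2f}")
```

A production system would use far richer features and a far bigger model, but the shape of the problem is exactly this: our past edits are the training data.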
Manipulating Depth of Field
We already have the ability to produce shallow depth of field effects through blurring and intelligent cut-out technology in our smartphones and image editing software. With AI technology that already exists, though, I can see this being expanded further and in the opposite direction: foreground and background content will be sharpened so that it appears in focus. This may be combined with camera technology so that it's hard-wired into the shots (it already exists in some cameras, such as the Lytro), meaning you can take a shot and choose any depth of field or focus point afterwards.
While this technology is out there right now, it's not especially sophisticated and hasn't made it into the mainstream market. As soon as manufacturers introduce it to all cameras, it'll streamline the photo-taking process: an image can be captured, and the focus point and/or depth of field can be determined after the fact by art directors or other creatives.
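A hedged sketch of the software half of this is already possible with a pretrained monocular depth model: estimate a depth map, then blur each pixel according to its distance from a chosen focal plane. The checkpoint name is a real public model; the focal plane value and blur strength are arbitrary choices, and the path is a placeholder.

```python
# Synthetic depth of field: estimate depth, then blend sharp and blurred
# versions of the photo based on distance from a chosen focal plane.
# Requires: pip install transformers torch pillow numpy
import numpy as np
from PIL import Image, ImageFilter
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("scene.jpg").convert("RGB")  # placeholder path
depth = np.array(depth_estimator(image)["depth"], dtype=np.float32)
depth = (depth - depth.min()) / (depth.max() - depth.min())  # normalize to 0..1

focal_plane = 0.7  # pick the focus distance after the fact
blurred = image.filter(ImageFilter.GaussianBlur(radius=8))

# The further a pixel's depth is from the focal plane, the more of the
# blurred version shows through.
alpha = np.clip(np.abs(depth - focal_plane) * 3, 0, 1)[..., None]
out = (np.array(image) * (1 - alpha) + np.array(blurred) * alpha).astype(np.uint8)
Image.fromarray(out).save("scene_shallow_dof.jpg")
```

Change `focal_plane` and rerun, and the focus point moves: exactly the after-the-fact choice described above.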
Resizing Images
We currently have a limited ability to upsize or downsize our raster photographs through image editing software, but there will be a huge shift in this technology when AI takes hold. We can see it now in AI-powered image editors such as Luminar AI, and a little in Photoshop's Neural Filters. Taking this technology to its ultimate endpoint, though, we should be able to resize (or should I say, upsize) an image so massively that it won't matter what resolution we shot at.
This has the potential to cause a paradigm shift in how manufacturers sell their cameras. One of the first things we're told is how many megapixels a camera has, because this gives us an idea of how much detail it can capture. But if we can automatically resize images, then as long as we have a certain amount of data, we won't need the biggest number. This in turn could affect how cameras are made: generally speaking, of two image sensors of identical size, the lower-resolution one produces less image noise than its higher-resolution counterpart. If we only need XX megapixels before upsizing with AI, then cameras can come loaded with exactly that amount, and we can really start breaking some low-light shooting barriers.
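AI upscaling is already accessible enough to sketch. Here's a minimal example using OpenCV's dnn_superres module with a pretrained EDSR model; the module ships with opencv-contrib-python, but the .pb weights file must be downloaded separately, so the path is a placeholder.

```python
# 4x AI upscaling with OpenCV's dnn_superres module.
# Requires: pip install opencv-contrib-python, plus the EDSR_x4.pb weights.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # pretrained 4x super-resolution weights
sr.setModel("edsr", 4)      # model name and scale must match the file

image = cv2.imread("small_photo.jpg")  # placeholder path
upscaled = sr.upsample(image)  # 4x the pixels, detail inferred by the network
cv2.imwrite("big_photo.jpg", upscaled)
```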
I have yet to come across an article by this person that I would have liked. Tragic.
Thanks to AI it will write better posts in the future.
I hear the same about phones... AI, AI, AI... yet I see the new iPhone with a bigger pixel pitch... AI isn't enough...
We’ve got eye and animal AF, image stabilisation, ever-wider dynamic range, better shadow recovery, auto modes, more and more megapixels, and now AI that will do a lot of the work for us. As far as I’m concerned, technology is removing a lot of the individual personality from photography. I like to get my compositions right in camera because that is how I’ve always worked, and I don’t wish to rely on AI and other technologies for any of it.
Then... don't use them. Literally all of the auto modes and AI can be turned off or not used at all. You have the power.
I don't use those modes, but my point is that if people rely on them too much, surely they will take away a lot of the individuality from photographs, the human input. People no longer have to get a perfectly framed, perfectly exposed photo at the scene because software can 'correct' all of it for them.
How about you just worry about what you do and not so much about what other people do? If you enjoy making it harder than it needs to be and settling for what you end up with, then knock yourself out. And snap out of your delusional world if you think "corrections" weren't made in the darkroom, too.
Bear in mind, the tech you mentioned didn't exist in Henri Cartier-Bresson's day. Who's to say he wouldn't have used any of it?
Harder than it needs to be, and settling?? What, because I personally don't want to use technology to cut corners? I was merely making an observation on modern AI software and where photography is going; no need to get so triggered by my comments. Besides, this is a comments forum; we are encouraged to give our opinions. Also, Henri didn't believe in editing his own photos. He simply wasn't interested in it. What came from the negatives after he sent them off to be processed in a lab were his photos. Who's to say he wouldn't be the same if he were alive today? We can all speculate.
Well, you're the one getting all annoyed by tech and by those who choose to use it. You act like we are just going to close our eyes, fire away, and then fix it in post. lol. Also, think about it. Use your head. Many photographers deliver hundreds or thousands of images per month. It's in their best interest to capture the images well so there's less work in their workflow. And if one hasn't got a workflow, they're just an eccentric snapshooter who settles for whatever their camera handed them. The camera doesn't get the final say; the photographer does.
Getting it right in camera! What do you actually mean by that? It's one of those glib statements thrown around by photographers that actually means nothing. So what do you actually do to 'get it right'? Shoot JPEG and allow some algorithm to make the final decision? Or shoot raw and end up with a flat image? How on earth can you take an image of a scene whose dynamic range exceeds that of your camera and hope to 'get it right'?
'Getting it right' is just an expression meaning that the more 'right' a photo is in camera, the less editing it needs in post. In other words, it's about getting the shot correct on the first try, aiming for it to be as close to perfect as possible straight out of the camera. This works for me as a street photographer and helps me see the final shot at the scene rather than spending too much time in post 'correcting all the mistakes'. For me, it's good discipline. At the end of the day, it's just a way to minimise post-production editing as much as possible. Bear in mind that Henri Cartier-Bresson always aimed to get his compositions 'right' in camera; he sent his films off to be developed as they were and wasn't the least bit interested in developing them himself or enhancing his prints in a darkroom.
When photographing groups of people, it would be nice to have AI select the shots where everyone has their eyes open. AI could also deselect photos that are out of focus. This could help a lot now that we have cameras with high fps.