Photoshop's Generative Fill is possibly the single most talked-about addition in the history of the program. This AI-powered feature allows users to add elements to a photograph that were not present in the original capture. Many creators love the feature. But that may be about to change due to a recently announced change in Adobe's pricing structure.
Generative Fill allows users to create elements of an image not initially captured on camera. Suppose you had photographed someone with a 3/4-length composition. Generative Fill can create the bottom 1/4 of your subject. The feature works well when used to create a horizontal image from a vertical or vice versa. If the borders of the image show scenes of a beach, grass, or a solid-colored wall, the new content that Photoshop creates is often impossible to distinguish from the photograph's original content. AI can interpret generic items like waves, leaves, sand, and sky in various ways, and many of these will look correct to the human eye. Generative Fill is less effective at creating something like additional buildings in a NYC skyline scene, since we know what the buildings should look like, and we aren't likely to be fooled by structures that don't match the ones we know to be in the city. Still, the feature is remarkable and has applications for creative and corrective image adjustments.
The feature initially debuted in a beta version of Photoshop a few months ago, but is now a part of the standard version of Photoshop Desktop (September 2023 release). Currently, Generative Fill is free to anyone who has purchased a license for the program, but this will change beginning November 1, when Photoshop will require users to have something known as generative credits to use Generative Fill.
According to Adobe's website, "Generative Fill, Generative Expand, Text to Image, (and) Generative Recolor" will each require 1 credit. Along with the standard "Plans are subject to change" caveat, the site also confusingly states, "Usage rates may vary." Users who license all Creative Cloud apps will have 1,000 free credits each month, while single app users will receive 500 credits per month. Credits will not roll over to the next month. Additional credits will be available for purchase at US $4.99 per month for 100 credits. Adobe also says the tool will still work after exhausting credits, but that the tool will run slower. How slow remains to be seen.
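To see how the quoted numbers add up, here is a rough back-of-the-envelope sketch based on the figures in the article (1 credit per generation, 500 or 1,000 free credits per month, and US $4.99 per month for an extra 100 credits). The usage figures are hypothetical, and this assumes extra credits can only be bought in whole 100-credit packs:

```python
import math

def monthly_overage_cost(generations: int, free_credits: int,
                         pack_size: int = 100, pack_price: float = 4.99) -> float:
    """Cost of extra credit packs needed beyond the free monthly allowance,
    assuming 1 credit per generation and whole-pack purchases only."""
    shortfall = max(0, generations - free_credits)      # credits beyond the allowance
    packs = math.ceil(shortfall / pack_size)            # round up to whole packs
    return round(packs * pack_price, 2)

# Hypothetical single-app subscriber (500 free credits) making 1,200
# generations in a month: 700 extra credits -> 7 packs -> $34.93
print(monthly_overage_cost(1200, 500))
```

Since credits don't roll over, a light month's unused allowance does nothing to offset a heavy month's overage under this scheme.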
Before this announcement, Photoshop image adjustments were not billed on a per-image basis. Once a user paid the monthly license fee, they were free to make as many adjustments as they desired, no matter how detailed those adjustments may have been. With the introduction of this pricing model, Adobe is making a radical shift in how its subscribers work with Photoshop. Users can no longer experiment endlessly with their images without incurring additional fees. It remains to be seen if this model will extend to its video-editing software, Premiere, or other Creative Cloud programs in the future.
Full details can be found on Adobe's website.
If I'm reading the Adobe text correctly, the credits simply give you faster access and loading. If you run out of credits, you can still use the features, but at slower speeds. There is mention of a possible "cap", though I'd hope that's reserved for power users using the feature for outsourced type work? But I guess we will see in November... 🤔
> Generative credits provide priority processing of generative AI content across features powered by Firefly in the applications that you are entitled to. Generative credit counts reset each month.
That is indeed what it sounds like...
Does that mean that those workstation-grade GPU cards aren't useful for Photoshop's Generative AI because it's all offloaded to the cloud?
(To be sure, for most of us, it'd probably be cheaper to buy Generative credits than to buy a $10k workstation card...)
No matter how you interpret the text, we have to agree this isn't a good thing for users.
Wonder how they're going to deal with refunds if it generates crappy results. =P
*When* it generates crappy results.
This sounds similar to buying "jobs" on Midjourney to use the fast GPU. When you run out of jobs, it's possible to buy more or work in slow mode. But working in slow mode could take all night to execute a single prompt that might take less than a minute with the fast GPU.
I'm no expert, but this is probably the result of the decentralized nature of AI in web3. It runs on a shared GPU instead of using our personal computers' processing power like when opening Photoshop. How quickly prompts can be executed depends on how many people are sharing the same GPU, so if you're working during a time when lots of people are on the same GPU, times are slower than if you were using it alone. That's my understanding, but I'm still learning how the space works and I could definitely be wrong. The good news is we might not have to upgrade our own computer specs as often as we did in the past.
These final A.I.-altered images aren't photographs. I strongly suggest a different vocabulary for computer-made images.
How about "fake" imagery vs. non-AI "original" imagery. Orig could still be edited but not AI. NOT. LOL. Only us "old school" care. Nobody else cares. The attitude is if it makes me money..... do it. Absolutely True vs not so true is really hard to tell apart these days and will only get harder. Ok. I'm done. :) On your mark..... set..... go
Digital Art
Fauxtographs
Excellent! The only thing I don't care for is the fact that I didn't come up with it. :-)
Damn! That's a good one. I'm gonna have to borrow that term.
Nt
I agree that they are NOT photographs. Unfortunately, anything that looks "photographic" seems to end up being called a photograph nowadays. I've noticed some popular trading NFTs look strangely like AI could have generated them but they're still categorized as photography on the exchange. Manipulation in post has always been a "crutch" for bad photography but AI really is tailor made for total losers. Nevertheless, I fully expect it will be adopted by most everybody in advertising, portrait and event photography :-)
Because no one ever dodged, burned, etc. in the darkroom. Not all "manipulation in post" is the same.
manipulation vs editing
I was being generous and only talking about manipulation which usually refers to adding or removing elements of the photo. But if we were to be totally honest most "darkroom work" was done to bail out the photographer even if all the original elements remained intact.
I disagree completely. Saying it's "to bail out the photographer" is false as well as arrogant and even pejorative. Everything about photography is subjective, from the choice of subject to the framing, the lens and film selection, exposure settings, camera scene settings (for digital cameras), darkroom processes chosen (even down to the chemicals used and duration of use) or digital equivalents, even down to the selection of papers and inks for printed photographs. At every step in the photographic process, the photographer is making choices that affect the final result; that's the very nature of the process and cannot be escaped.
There is no such thing as the One True Way(tm) to make photographs.
If everything is subjective then my opinion matters just as much as yours so chill out
For me, an AI-created image is not a photograph. But an AI-altered image, such as the one I posted with this article, is indeed a real photograph. I did the heavy lifting of approaching this couple on the street and asking to create this image. All AI did was create a vertical version for me.
I've often seen the term "photo illustration" used for heavily-Photoshopped or composited images.
I prefer API - artificial photographic imagery
They better be able to match the resolution of the original image!
Whose photos are Adobe using for its AI to learn from?
Probably the ones we are giving it while this feature is free ;)
Adobe has its own cloud-based image service made up of images and videos that subscribers consent to upload, and this service provides a revenue stream for those subscribers.
When subscribers sign up, they agree to allow Adobe the rights to use those images and videos. This arrangement keeps Adobe out of any copyright issues.
Um, we are paying Adobe. Every. Single. Month.
So this is a subscription in a subscription
"Yo, Dawg; we heard you like subscriptions, so we put a subscription in your subscription!"
Sounds like framing properly is still the #1 way to shoot. For accidental and infrequent issues, maybe credits are good enough. I shot a room scene without a background this week that a company will drop into a scene, and it doesn't appear that they use an AI- or CGI-generated scene. I can't wait to see what they come up with, but I guess AI is too random and CGI too time-consuming, even in 2023.
I like the way Adobe only gives you double the credits if you pay five times more for the full Creative Cloud apps collection vs. a single app. That's so pathetic. For the same competitive price, sorry Adobe, I'll go shop elsewhere, and I may even downgrade to a single app since my need for the full creative collection is not quite what it used to be. Thanks for the reminder, Adobe.
Most people are probably using Generative Fill for more interesting things than I would use it for. I like being able to extend the image as I did in the samples above, but I wouldn't necessarily state that that was the main use of this feature. But it is one that works well with certain types of images.
Yes, but that's no longer photography if they enlarge past a certain percentage. In that case, I'm glad they can't copyright the original. Whatever "art" they assume they're creating from there is not relevant to me. Generative Fill is kind of the wrong name. You can fill a space to fix an issue or even enlarge slightly to help framing, but doubling a canvas size for creation is a totally different approach to fill and really should be called new canvas.
I suspect that this may change. Ultimately credits cost money, and when you spend credits, you don't know what you are going to get, and might need to do it over and over, so this has all the same elements as loot boxes in video games, making it immediately illegal in big chunks of the world. I could see them saying "spend a credit to unlock ai in this project file" but not "spend a credit every time you want to roll the dice and see what ai makes."
It could also be done where you are only billed when you export the image. They could also have pricing tiers where higher-resolution exports are billed more than low-res exports. No matter what, it is bad news for photographers.
I like their Generative Fill, and especially their Generative Expand. Hopefully, this change won't affect me too much since I don't use it massively. Once in a while, I need my vertical 2:3 images transformed to 4:5. As an example, for the image below, I needed the right side expanded. Generative Expand did a damn good job. It got right the type of leaves/plants, color, shadows, highlights, and even DOF. Someone's images out there are good teachers. :D
This is my favorite use of the feature. When I use a Leica, it works best horizontally. So it's nice to have the option to create a vertical in post (if the scene allows for it).
That's the type of work I was hoping to get done with Generative Expand, once it is available.
Generative Expand is available on the current PS update. I have v25. I think it was released a couple of weeks ago.
It was available on a Beta version of PS that could be downloaded by anyone.
Man, I'm glad I dumped Adobe. I knew it was gonna get bad when CC was released, but this is worse than I thought.
How is it worse?