Today, it is common for a small company of 1-3 employees to design, create, and market a product without outsourcing any part of the process. Hiring someone from outside to market or sell the product is not an option for a small organization that must be conscious of how every penny is spent.
Fortunately, a variety of programs can assist a small operation in its promotion and marketing. A new feature in Adobe's Firefly Generative AI service has the potential to make things even easier for individuals and small companies.
The feature, known as Custom Models, is currently available only in the Enterprise version of Adobe Firefly Services. Custom Models works in conjunction with the popular Text-to-Image feature (available to everyone), which lets a user generate a photograph or artwork by describing the desired output in words. A prompt might be grounded in reality, such as "dark-skinned girl on a beach at sunset," or fantasy based, such as "cartoon elephant balanced on a beach ball at the top of a hill made of cotton candy." In either case, Firefly generates several options in only a few seconds.
One concern for anyone creating marketing materials is that the imagery a program produces must be consistent with the brand's aesthetics. If text is included, its colors, fonts, and tone must also be on brand. Custom Models lets the user input information about the look and feel of the brand, as well as about the product itself, so that anything the program creates stays in proper alignment with the brand.
Imagine you are a backpack designer in need of visually exciting imagery of the product for your website. One option would be to book a flight to an exotic destination and take pictures of the bag there. You might need to wake up at 4 am for the best light, or hike a mountain to get a shot of the bag with an iced-over lake in the background. This approach would be both costly and time-consuming, and to keep the imagery fresh, the process would have to be repeated several times per year.
Adobe's new Custom Model offers an easy solution for Enterprise customers. Let's continue with our entrepreneur, who has designed a backpack and wants to use Firefly to create on-brand imagery to promote the product. First, the user would train the program to understand the bag's look, size, detail, materials, and texture. To do so, the user would input 15-30 images of the bag. These photographs must be sharp and properly exposed. Also, the pictures should capture as many angles of the bag as possible.
Once the 30-40 minute training period has been completed, Firefly will have its own AI-generated image of the product. Now, if the user were to enter the prompt "backpack on an empty beach at sunset", the image the program creates would incorporate the bag being sold rather than a randomly generated bag. At this time, the feature works best on objects rather than people. It is reasonable to assume, however, that it won't be long before a headshot photographer can input 10 images from their portfolio and use them to generate AI images of new faces consistent with the photographer's lighting, composition, and posing. Such images would be well suited to marketing because the model depicted is not a real person, so no one needs to consent to commercial use of the image or be compensated for it.
A Custom Models user can save the training information to utilize it in all future AI creations. Images generated in Firefly can be exported to Adobe Express to add text or graphics and create advertisements for platforms such as Instagram and Facebook. In Express, the user can save style templates that ensure all team members use the same colors, fonts, and graphic elements in their marketing materials, so that the imagery stays on brand no matter which team member creates it. It is worth noting that Adobe trains its AI on photographs for which it holds the rights. Adobe also records technical aspects of each photograph it uses, such as depth of field, ISO, and focal length, so that generated elements can be rendered with matching settings and blend naturally with the real-world elements of an image.
As detailed in a recent article, Adobe allows users to upload a single image for use as a Style Reference. This photograph helps the interface understand the color and tone of the images desired by the user. Users can also upload a Structure Reference that helps the program understand how the subject should be designed or positioned.
Currently, the Custom Models feature described in this article is only available to Enterprise customers, which means it isn't accessible to a self-employed or hobbyist photographer. This technology may trickle down to everyday users in the next year or two.
Take it a step further: AI-generated products and customers. Society no longer needs any climate-changing products; we need sustainable solutions. So replace everything with a digital representation and let the simulation run while we extract the added value.
A fantasy rucksack can be compressed into an eight-bit value which can be transferred via database entries between entities of choice. Human society just has to extract the taxes or added value from these transactions and use that as a means of wealth and consumption. It is similar to shtcoins, where you just shift bits between each other. The price action can be simulated via a Gaussian distribution (I did it myself in 2016 for a fantasy stock market), which creates the illusion of actual price movements.
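The Gaussian price simulation described above can be sketched in a few lines. This is a hypothetical reconstruction, not the original 2016 code; the function name and parameters are invented for illustration. Each day's percentage return is drawn from a normal distribution, producing a random walk that looks like market movement:

```python
import random

def simulate_prices(start=100.0, days=30, drift=0.0, volatility=0.02, seed=42):
    """Generate daily prices whose returns are drawn from a Gaussian
    distribution -- a random walk that mimics real price action."""
    rng = random.Random(seed)  # fixed seed makes the run reproducible
    prices = [start]
    for _ in range(days):
        # Each day's return is a normally distributed percentage change.
        daily_return = rng.gauss(drift, volatility)
        prices.append(prices[-1] * (1 + daily_return))
    return prices

series = simulate_prices()
print(f"day 0: {series[0]:.2f}, day {len(series) - 1}: {series[-1]:.2f}")
```

With zero drift the series wanders with no real trend, yet a chart of it is easy to mistake for a genuine instrument, which is the whole point of the illusion.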
Then just put an AI-bot to explain the price movements. One day it can be a conflict in database east, the next day a strike in bit range 1024-256000. Whatever makes the price movements from day to day seem plausible. Add some random story about an old lady who lost her handbag but the young gentleman returned it to her and the illusion is complete.
We do not even need social media, as that can be wholly simulated. Podcasts, video interviews - nothing is real, but the value add is. In 2040 we can have a mandatory injection of a powerful sedative and implement the Matrix once and for all, for science, by science. The science of commerce and ecology.
A blissful AI-generated existence until some mofo goes full Total Recall. The Schwarzenegger one, not the goth chick vampire version.
Value add will never be easier. Take arbitrary 8-bit value, add some percentage value, move new value to other entity, deduct tax amount. Entity instructs machine to consume or transact the new value. Etcetera. Tax value is used for developing better sedatives and happy drugs for the sleepers that never awaken. No real goods are moved which saves the climate.
We only need a massive solar array in the desert and air-conditioned bliss-centers. Some sorry suckers need to live in the real world to take care of the infrastructure while the mass of humanity goes into hibernation. But the ecology will thrive. Forests will expand, all the animals will hump and procreate like it is 19999 BC. We will truly save the planet. The return to the Garden of Eden simulation.
lol read the room, John.
Not sure what it is about this article, but I don't understand what Patrik is trying to say and I don't understand what you, Brock, are trying to say, either.