Adobe Has Developed Color Transfer Technology

Adobe's software bundle is something almost every artist or creative professional in the visual art industries uses. We can't really go without it, and on a personal note, it's like a marriage. Adobe has done some amazing development with its software, and now they've teamed up with Cornell University to develop new imaging technology that makes it possible to transfer a photo style from one image to the next, and still make the image look realistic.

Imagine taking a picture of a cityscape during the day and then turning it into a night scene using a different image of a city at night.

To make this possible, the researchers used deep learning methods to capture the color cues and lighting of a reference image, and then developed a way to keep the transferred colors and light within the parameters set by that reference.
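
For a sense of what "capturing color cues" can mean at its simplest, here is a sketch of classic Reinhard-style color transfer, which just matches each channel's mean and standard deviation in Lab space. To be clear, this is a decades-old baseline, not the Adobe/Cornell method, and the file names are hypothetical:

    import cv2
    import numpy as np

    def reinhard_color_transfer(target_path, reference_path, out_path):
        """Shift the target image's per-channel color statistics onto
        the reference image's (Reinhard et al., 2001)."""
        target = cv2.imread(target_path)
        reference = cv2.imread(reference_path)

        # Work in Lab so luminance and color can shift independently.
        t = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
        r = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)

        # Per-channel mean and standard deviation of each image.
        t_mean, t_std = t.mean(axis=(0, 1)), t.std(axis=(0, 1))
        r_mean, r_std = r.mean(axis=(0, 1)), r.std(axis=(0, 1))

        # Rescale the target's distribution to match the reference's.
        result = (t - t_mean) * (r_std / (t_std + 1e-6)) + r_mean
        result = np.clip(result, 0, 255).astype(np.uint8)

        cv2.imwrite(out_path, cv2.cvtColor(result, cv2.COLOR_LAB2BGR))

    # Hypothetical example: restyle a daytime cityscape with a night reference.
    reinhard_color_transfer("city_day.jpg", "city_night.jpg", "city_day_as_night.jpg")

A global statistics match like this is exactly the kind of transfer that tends to break realism in local areas, which is what the deep learning approach and its constraints are meant to address.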

Examples

[Three example image sets from the repository, each captioned "Style being applied."]

This is still in development, but it's amazing to see what the possibilities are and what photographers, artists, and creative professionals will be able to do with it in the future.

They've made the code available in a GitHub repository, and you can check out the research paper here.

[via The Next Web]

Wouter is a portrait and street photographer based in Paris, France. He's originally from Cape Town, South Africa. He does image retouching for clients in the beauty and fashion industry and enjoys how technology makes new ways of photography possible.

9 Comments

Could make it easier to color tone your Instagram profile. Curious to see how this will be applied and used in the future.

Shouldn't the title read "Adobe and Cornell University Develop…"?

Hey Adobe, how about teaming up again with Cornell to find ways to speed up and improve Lightroom?

I pulled down the landscape/cloud set of images in this post to run a test with Photoshop's "Match Color" adjustment, and what a difference: the Match Color result looked so much worse. I really hope this new feature makes it into Adobe products soon.

The headline makes for an exciting article, but the software isn't ready for prime time yet. For one thing, a lot of the code is tied to the CUDA toolkit, which in turn is tied to Nvidia GPUs. So you can only run it on Linux, on a system with an Nvidia video card that is supported by the CUDA toolkit. But I'm sure Adobe is more than capable of figuring all this out. Or Prisma... after all, the code is open source, so there are no secrets here.
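
For illustration, a minimal Python check for that GPU dependency might look like the following (this assumes a PyTorch-style CUDA setup, which may not match the repository's actual toolchain):

    import torch

    # The released code is tied to CUDA, so fail fast if no supported
    # Nvidia GPU is visible; there is no CPU fallback.
    if not torch.cuda.is_available():
        raise SystemExit("No CUDA device found: an Nvidia GPU with a "
                         "supported CUDA toolkit is required.")

    print("CUDA device detected:", torch.cuda.get_device_name(0))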

AI-based Selection mask is all I'm waiting for...

Wait…! You mean the OSS-developed "color mapping," which DarkTable has had in its OSS for some time, has been re-invented by Adobe (which "We can't really go without") in collaboration with Cornell?

http://www.darktable.org/usermanual/ch03s04s05.html.php#color_mapping

Shouldn't the title be, “Adobe/Cornell has re-invented the ‘color transfer’ wheel previously developed in OSS”?

Fine… less political, no shilling or trolling: "Cornell finally brings color transfer technology to Adobe". How is that?

(Also, on the point of shills/trolls/et al., change «…is something almost every artist or creative professional in the visual art industries uses. We can't really go without it, and on a personal note, it's like a marriage» to «…seems ubiquitous», at least until Adobe products are available on Linux, which thousands of artists and creative professionals in the visual art industries use.)

I pulled down both of the images in the link you mentioned and used Photoshop's "Match Color", and it gave the same result as they provided. The new method Adobe and Cornell are developing transfers color differently, based on the images provided in this article.

Hi William, the images used are from the GitHub repository where they discuss the technology. These were sample images they provided. You can check the link up above if it's not clear.

When I did Image 3 in DarkTable, using their style and their target, I got much better results. Check out the buildings in the bottom right compared to theirs. I used this example because the article says «…that makes it possible to transfer a photo style from one image to the next, and still make the image look realistic», and none of the other examples they showed look realistic to me. Indeed, my results look more realistic than theirs, IMHO. Truth be told, some of their "style" images do not look realistic to begin with.

On the other hand (could be due to having only a low-res style image), my building on the far left seems to have missed the input. Did not see that until after posting. :-)

[EDIT (again)]
Also, looking at the details (see 1:1 images), although I had neither the original full-res style image nor any 14-bit/16-bit images, I still have far more detail and no "solarisation" artefacts. I think that technology still has a long way to go.