Google’s new AI system, called Neural Image Assessment (NIMA), has been trained to score photos both on their technical quality and on how aesthetically pleasing they are.
Writing on the Google Research blog, researchers state that “quantification of image quality and aesthetics has been a long-standing problem in image processing and computer vision.”
What’s significant about this particular development is that while current software can assess technical qualities such as noise, blur, and compression artifacts, this new system can seemingly interpret aesthetic details, something that would ordinarily require a degree of human judgment. Interpreting a photo is subjective, and the way we judge its appeal varies with our own personal experiences and preferences. The breakthrough goes well beyond the current binary rating of “high” or “low” quality.
The researchers explain: “Our proposed network can be used to not only score images reliably and with high correlation to human perception, but also it is useful for a variety of labor intensive and subjective tasks such as intelligent photo editing, optimizing visual quality for increased user engagement, or minimizing perceived visual errors in an imaging pipeline.”
The software scores images on a scale of 1 to 10. For the assessment, a deep neural network was trained on data labeled by humans. It can make photo-editing recommendations, such as adjusting brightness levels. According to The Verge, the assessment “draws upon reference photos if available, but if not, it uses statistical models to predict image quality.”
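To make the scoring idea concrete, here is a minimal, hypothetical sketch of how a NIMA-style system can collapse a predicted distribution over the ratings 1 to 10 into a single score. The function name and the probability values below are illustrative assumptions, not Google’s actual code or data:

```python
# Hypothetical sketch: a NIMA-style network outputs a probability for
# each rating bucket from 1 to 10; the final score is the mean of that
# distribution rather than a single hard label.

def mean_score(probabilities):
    """Collapse a 10-bucket rating distribution into one mean score."""
    return sum(p * rating for rating, p in enumerate(probabilities, start=1))

# Illustrative network output: probabilities for ratings 1 through 10.
predicted = [0.01, 0.02, 0.05, 0.10, 0.15, 0.20, 0.22, 0.15, 0.07, 0.03]

print(round(mean_score(predicted), 2))  # a single score out of 10
```

Averaging over the whole distribution, rather than picking the single most likely rating, is one way a system can express that human raters disagree about the same photo.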
The aim is to one day provide real-time feedback on photography. It could prove an invaluable tool when, for example, you want a second opinion on which images to include in your portfolio.
Images via Google Research.