Can a machine be trained to edit images that most people would think were shot by a professional human photographer? That’s what Google is trying to figure out with a machine learning research project.
“Whether or not a photo is beautiful” is a subjective question, and Google wanted to see whether machines could be taught subjective concepts. To find out, its researchers built an experimental machine learning system for artistic content creation, and discovered that it could learn to edit photographs much as a professional photographer would.
The AI roams landscape panoramas found in Google Street View, searching for the best compositions. Once it finds a promising one, it applies post-processing edits to improve the image’s look, much as a photographer would in Photoshop or Lightroom.
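Google hasn’t published the search procedure, but conceptually this step is a sweep over candidate crops, each scored by a learned aesthetic model. Here is a minimal pure-Python sketch, with a hypothetical `score` callback standing in for that model (the function name and parameters are illustrative, not Google’s):

```python
def best_crop(img_w, img_h, crop_w, crop_h, stride, score):
    """Slide a crop_w x crop_h window over an img_w x img_h panorama
    and return the top-left corner of the highest-scoring crop.
    `score(x, y, w, h)` is a stand-in for a learned aesthetic model."""
    best_pos, best_score = None, float("-inf")
    for x in range(0, img_w - crop_w + 1, stride):
        for y in range(0, img_h - crop_h + 1, stride):
            s = score(x, y, crop_w, crop_h)
            if s > best_score:
                best_score, best_pos = s, (x, y)
    return best_pos

# Toy scorer that prefers a crop anchored near (20, 30):
print(best_crop(100, 100, 50, 50, 10,
                lambda x, y, w, h: -abs(x - 20) - abs(y - 30)))  # → (20, 30)
```

A real system would score pixel content rather than coordinates, and would search over multiple crop sizes, but the exhaustive-search skeleton is the same.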
Edits include cropping, tweaking saturation, applying HDR effects, and adding dramatic lighting with content-aware brightness adjustments.
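The saturation and brightness tweaks in that list are simple per-pixel operations. As a rough illustration (not Google’s implementation, which isn’t public), here is what they look like in plain Python, using the standard Rec. 601 luma weights:

```python
def adjust_saturation(pixel, factor):
    """Scale a pixel's distance from its grayscale value.
    factor=0 gives grayscale; factor=1 leaves the pixel unchanged;
    factor>1 boosts saturation."""
    r, g, b = pixel
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma weights
    return tuple(min(255, max(0, round(gray + factor * (c - gray))))
                 for c in (r, g, b))

def adjust_brightness(pixel, factor):
    """Multiply each channel by factor, clamping to the 0-255 range."""
    return tuple(min(255, max(0, round(c * factor))) for c in pixel)

print(adjust_saturation((200, 100, 50), 0.0))  # → (124, 124, 124), i.e. grayscale
print(adjust_brightness((100, 100, 100), 1.5))  # → (150, 150, 150)
```

HDR effects and content-aware brightness are far more involved (they vary the adjustment per region based on image content), but they build on the same kind of per-pixel arithmetic.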
What’s impressive is that Google’s AI was trained only on a collection of professional-quality photos rather than on before-and-after pairs, so it had to learn what makes a photo beautiful purely from finished work. To teach the machine what looks bad, researchers applied random filters to the professional photos to degrade their appearance.
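That degradation step can be sketched in a few lines: pick random filter strengths and apply them to a “good” photo to manufacture a “bad” negative example. This is a toy version under assumed filter choices (desaturation plus dimming); the actual filters Google used aren’t specified in the source:

```python
import random

def random_degrade(pixels, rng):
    """Manufacture a 'bad' training example from a professional photo
    by applying a random saturation cut and brightness drop.
    `pixels` is a list of (r, g, b) tuples; `rng` is a random.Random."""
    sat = rng.uniform(0.2, 0.7)     # heavy, randomized desaturation
    bright = rng.uniform(0.5, 0.9)  # randomized dimming
    out = []
    for r, g, b in pixels:
        gray = 0.299 * r + 0.587 * g + 0.114 * b
        out.append(tuple(
            min(255, max(0, round((gray + sat * (c - gray)) * bright)))
            for c in (r, g, b)))
    return out

pro_photo = [(200, 100, 50), (10, 240, 30)]
bad_photo = random_degrade(pro_photo, random.Random(42))
```

Pairing each original with its degraded copy gives the model a ranking signal (this one looks better than that one) without needing any human-labeled before-and-after edits.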
To see how good the resulting AI-edited photos were, Google asked professional photographers to rate a collection that mixed the machine-generated images with others of varying quality. Researchers found that about 40% of the photos the machine rated highly were also judged “semi-pro” or “pro” by the human photographers.
Google says this technology could one day help people compose and edit better photos. Check out the Creatism gallery to see them all.
Daniel is an Art Director and Graphic Designer with over a decade of experience in advertising and marketing in the Greater Toronto Area.