In the past decade or so, smartphones and social media apps have revolutionized our culture's relationship to images. From Instagram to Facebook to Pinterest to YouTube, photographs and videos are now so ubiquitous that they have become effectively disposable, with apps such as Snapchat trading on their promise to delete your images after a certain period of time. But while smartphones are a very visible driver of this change, it is easy to overlook the huge developments in image-editing software that have supported this revolution—from the HDR built into your smartphone's camera to the wide range of filters provided by Instagram.
Now, as reported by MIT News, Google and MIT's Computer Science and Artificial Intelligence Laboratory may have made another leap forward: an algorithm that can provide automatic, professional-level image retouching so quickly that you can see a preview before even snapping the photograph.
Presented last week at the computer graphics conference SIGGRAPH, the algorithm is designed to improve properties such as an image's contrast, color balance, saturation, and brightness—in effect, to correct everything that a professional photographer might adjust to produce a natural-looking but high-quality image. The algorithm is far more sophisticated than existing in-camera technologies such as filters, which apply changes either uniformly or based on simple information such as a pixel's brightness, and far quicker than editing a photo manually after it's taken.
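The difference between a uniform filter and a brightness-dependent one can be sketched with a toy example. This is purely illustrative (NumPy, grayscale values in [0, 1]) and is not the algorithm from the paper:

```python
import numpy as np

def uniform_filter(img, amount=0.2):
    """Apply the same brightness boost to every pixel."""
    return np.clip(img + amount, 0.0, 1.0)

def brightness_dependent_filter(img, gamma=0.7):
    """Adjust each pixel according to its own brightness:
    a gamma curve lifts dark pixels more than bright ones."""
    return np.clip(img ** gamma, 0.0, 1.0)

# Toy grayscale "image" with one dark and one bright pixel.
img = np.array([0.1, 0.9])
print(uniform_filter(img))               # same shift applied everywhere
print(brightness_dependent_filter(img))  # dark pixel lifted more
```

The new algorithm goes well beyond either of these, but the contrast shows why a single per-pixel rule is too crude for professional-looking results.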
The process works thanks to a machine learning system trained on thousands of photographs, from which it learned how a professional editor transforms an image. This learning then feeds into an algorithm that makes complex edits in just milliseconds, so the changes can be displayed in real time as you take the photograph.
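The supervised setup described above—learn a transformation from example pairs of original and retouched images, then apply it cheaply to new pixels—can be illustrated with a minimal sketch. The real system uses a deep network predicting much richer, spatially varying transforms; here we just fit a global tone curve by least squares, and the "retouched" targets are simulated with a gamma edit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5000)       # original pixel intensities
y = np.clip(x ** 0.6, 0, 1)      # simulated professionally edited targets

# "Training": least-squares fit of a cubic tone curve to the example pairs.
coeffs = np.polyfit(x, y, deg=3)

def apply_learned_curve(img):
    """Inference: evaluate the learned tone curve on new pixels.
    This step is just a polynomial, so it runs in microseconds."""
    return np.clip(np.polyval(coeffs, img), 0.0, 1.0)

print(apply_learned_curve(np.array([0.25])))  # roughly 0.25**0.6
```

The key property mirrored here is the split between a slow training phase and a very fast inference phase, which is what makes a real-time viewfinder preview feasible.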
To find out how the algorithm works in greater detail, read the article on MIT News or watch the video below.
News via MIT News.