Zhang and his team completed automatic colorization software last year. While the AI program was more accurate than previous attempts (a different program by a grad student was accurate only 20 percent of the time), it still failed often. And then, of course, there are objects like clothing that could be any number of colors and still be “correct.”
Now, Zhang and his team have revised the system, called Interactive Deep Colorization, to combine AI and human input. First, the program automatically generates color suggestions. Then, when the user clicks on a point in the image, the program suggests the most plausible colors for that spot. The user can choose from those suggestions (or pick a color of their own from a color grid), and the system applies that choice, still saving time by automatically detecting edges and subtle color gradations.
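The workflow above can be sketched in a few lines. This is a toy stand-in, not the released code: the real system feeds the grayscale image and the sparse user hints into a neural network, while here a simple nearest-hint assignment plays the role of the learned, edge-aware propagation so the control flow is easy to see. All function and variable names are illustrative.

```python
# Toy sketch of user-guided colorization: each click is a "hint"
# (a pixel location plus a chosen RGB color), and every pixel adopts
# the color of its nearest hint. This crudely mimics how the real
# network spreads sparse user colors across the image.

def colorize_with_hints(height, width, hints):
    """hints: list of (row, col, (r, g, b)) user clicks.
    Returns an height x width grid of RGB tuples."""
    def nearest_hint_color(r, c):
        # Pick the hint with the smallest squared distance to (r, c).
        return min(hints, key=lambda h: (h[0] - r) ** 2 + (h[1] - c) ** 2)[2]
    return [[nearest_hint_color(r, c) for c in range(width)]
            for r in range(height)]

# Two clicks: sky blue at the top-left, grass green at the bottom-right.
image = colorize_with_hints(4, 4, [(0, 0, (135, 206, 235)),
                                   (3, 3, (34, 139, 34))])
```

In the actual system a network replaces `nearest_hint_color`, so the propagation respects object boundaries and gradations instead of raw distance.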
The system was trained by converting 1.3 million color photos into grayscale. Using both the grayscale and color versions, the researchers taught a convolutional neural network to identify likely colors based on those 1.3 million photographs. Zhang says that with just one minute of human input, the recolored results improve dramatically.
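The training setup described here amounts to building (grayscale input, color target) pairs from ordinary photos. A minimal sketch, assuming standard ITU-R BT.601 luminance weights for the grayscale conversion (the actual paper works in the Lab color space, where the network predicts the color channels from the lightness channel):

```python
# Build a supervised training pair from a color photo: the grayscale
# version is the network's input, the original colors are the target.

def to_grayscale(pixel):
    # Standard BT.601 luma weighting of the R, G, B channels.
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def make_training_pair(color_image):
    """color_image: 2-D grid of (r, g, b) tuples.
    Returns (grayscale_input, color_target)."""
    gray = [[to_grayscale(p) for p in row] for row in color_image]
    return gray, color_image

photo = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
gray_input, color_target = make_training_pair(photo)
```

Repeating this over the whole photo collection yields the 1.3 million pairs; the network then learns to map grayscale inputs back toward their color targets.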
The team then tested the program by giving 28 users a brief demonstration of the software and asking them to color 10 images, spending only a minute on each photo.
The research was recently published and presented at the SIGGRAPH 2017 computer graphics conference. For the computer-savvy user, the code is available to download on GitHub.