
Google and MIT’s automatic photo software can edit your shots before you take them

Deep Bilateral Learning for Real-Time Image Enhancement
Photo editing software often compensates for a smartphone's limited hardware, but what if those shots could be edited automatically, in real time? That's the question researchers from the Massachusetts Institute of Technology (MIT) and Google asked when expanding an earlier machine-learning program that automatically improved shots after sending them to the cloud. Presented during this week's SIGGRAPH digital graphics conference, the new automatic photo software edits so quickly that the user sees the results on the screen in real time, before the shot is even taken.

The program is based on earlier MIT research that trained a computer to edit images automatically. Researchers taught that program to apply specific adjustments by feeding the system five variations of each image, each retouched by a different photographer. After repeating that process a few thousand times, the program learned to identify and fix common image issues.

The new software is built on the same artificial intelligence platform but speeds up the program to the point where it completes edits in a tenth of the previous time. That allows the live view from the camera to show edits in real time instead of sending the shot to a cloud system for editing. So how did the researchers achieve the speed boost? The software's output isn't actually an edited image but a formula for producing one. Calculating the changes, and only applying them if a photo is taken, speeds up the process.
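The separation described above, analyzing a preview to produce a reusable editing recipe rather than an edited image, can be illustrated with a toy sketch. This is not the researchers' actual method; the `compute_recipe` and `apply_recipe` functions and the simple gain-and-lift recipe are hypothetical stand-ins for the idea:

```python
import numpy as np

def compute_recipe(preview):
    """Analyze a cheap, low-resolution preview frame and return an
    editing recipe (here, just a global gain and lift), not a new image."""
    mean = preview.mean()
    # Hypothetical rule: scale exposure so the average lands near 0.5.
    return {"gain": 0.5 / max(mean, 1e-6), "lift": 0.02}

def apply_recipe(photo, recipe):
    """Apply the precomputed recipe only when a shot is actually taken."""
    return np.clip(photo * recipe["gain"] + recipe["lift"], 0.0, 1.0)

# The live view calls compute_recipe() continuously; the expensive
# full-resolution apply_recipe() runs only at the moment of capture.
preview = np.full((8, 8, 3), 0.25)      # dark, tiny preview frame
recipe = compute_recipe(preview)
full_res = np.full((4000, 3000, 3), 0.25)
enhanced = apply_recipe(full_res, recipe)
```

Because the recipe is a handful of numbers rather than millions of pixels, computing it every frame is cheap, which is what makes a live, pre-shutter preview of the edit feasible.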

The original image is also divided into a grid, a form of downsizing that speeds up editing without losing pixels in the final image. The program works on all of the grid's sections at once, editing hundreds of pixels simultaneously instead of processing every single pixel individually.
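The grid idea above can be sketched in a few lines: average each grid cell into a tiny image, decide an edit per cell, then spread that decision back over every pixel in the cell. This is only a minimal illustration, not the paper's bilateral-grid method; the `enhance_via_grid` function, the brighten-dark-cells rule, and the assumption that the image dimensions divide evenly by the grid size are all simplifications made here:

```python
import numpy as np

def enhance_via_grid(image, grid=16, gain=1.2):
    """Toy sketch: compute a per-cell brightness gain on a small grid,
    then upsample the gains and apply them to every original pixel."""
    h, w, _ = image.shape  # assumes h and w are divisible by grid
    # Downsample: average each grid cell (the cheap, low-res analysis).
    small = image.reshape(grid, h // grid, grid, w // grid, 3).mean(axis=(1, 3))
    # "Edit" the tiny grid instead of the full image: brighten dark cells.
    cell_gain = np.where(small.mean(axis=2, keepdims=True) < 0.5, gain, 1.0)
    # Upsample the per-cell decisions back to full resolution...
    full_gain = np.repeat(np.repeat(cell_gain, h // grid, axis=0),
                          w // grid, axis=1)
    # ...and apply them, so no pixels are lost in the output.
    return np.clip(image * full_gain, 0.0, 1.0)
```

The heavy analysis runs on a 16x16 grid regardless of how many megapixels the sensor produces, which is why this style of processing scales so well on phone hardware.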

With those two changes, the program needs only about 100 megabytes of memory to perform each image edit; without them, the same software needed 12 gigabytes.

The research is part of a growing trend of looking to computational photography to solve the shortcomings of the hardware that can fit inside a smartphone. Unlike earlier programs, the automatic photo software from MIT and Google tackles one of computational photography's biggest obstacles: the limited processing power of a mobile device. In the researchers' experiment, the program applied a high-dynamic-range algorithm in real time, boosting the image's colors and range of light beyond what the hardware alone could achieve.

“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” said Jon Barron, a Google researcher who worked on the project. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”

Hillary K. Grigonis