
Google Brain brings ‘zoom and enhance’ method one step closer to reality

The concept of enhancing a pixelated image isn’t new. “Zoom and enhance” has put dozens of fictional criminals behind bars in shows like Criminal Minds, but that kind of technology has so far evaded the real world. Well, the boffins over at Google Brain have come up with what may be the next best thing.

The new technology uses a pair of neural networks that are fed an 8 x 8-pixel image and then generate an approximation of what they think the original, higher-resolution image looked like. The results? Well, they aren’t perfect, but they are pretty close.


To be clear, the neural networks don’t magically enhance the original image; rather, they use machine learning to figure out what the original could plausibly have looked like. Using the example of a face, the generated image may not look exactly like the real person, but instead like a fictional face that represents the computer’s best guess. In other words, law enforcement can’t yet use this technology to pull a suspect’s face out of a blurry reflection in a photo, but it could help police produce a pretty good guess at what a suspect looks like.

As mentioned, two neural networks are involved in the process. The first, the “conditioning network,” maps the pixels of the 8 x 8-pixel image onto a similar-looking but higher-resolution image. That image serves as the rough skeleton for the second network, the “prior network,” which adds detail based on patterns it has learned from a large library of existing images with similar pixel layouts. The two networks then combine their outputs into one final image, and the results are pretty impressive.
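For the technically inclined, here is a rough sketch in PyTorch of how such a pair of networks could fit together. The layer sizes, class names, and simplified masking below are illustrative assumptions rather than Google Brain’s actual code; the one detail drawn from the underlying research is that the two networks’ per-pixel outputs are combined by adding their logits before a softmax.

import torch
import torch.nn as nn

class ConditioningNetwork(nn.Module):
    # Upsamples the 8x8 input into per-pixel color logits at 32x32.
    def __init__(self, channels=64, num_colors=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(),
            # Two transposed convolutions upscale 8x8 -> 16x16 -> 32x32.
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
            nn.ReLU(),
            # One logit per possible value, per RGB channel, per pixel.
            nn.Conv2d(channels, 3 * num_colors, 1),
        )

    def forward(self, low_res):
        return self.net(low_res)

class MaskedConv2d(nn.Conv2d):
    # A convolution that sees only pixels above and to the left of the
    # current one, the core trick behind PixelCNN-style autoregressive
    # priors. (A real PixelCNN also masks across color channels; this
    # version is simplified for illustration.)
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.ones_like(self.weight)
        h, w = self.kernel_size
        mask[:, :, h // 2, w // 2 + 1:] = 0
        mask[:, :, h // 2 + 1:, :] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)

class PriorNetwork(nn.Module):
    # Stand-in for the prior network: predicts each high-resolution
    # pixel from the pixels generated so far.
    def __init__(self, channels=64, num_colors=256):
        super().__init__()
        self.net = nn.Sequential(
            MaskedConv2d(3, channels, 7, padding=3),
            nn.ReLU(),
            MaskedConv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 3 * num_colors, 1),
        )

    def forward(self, generated_so_far):
        return self.net(generated_so_far)

def pixel_distribution(cond_logits, prior_logits, num_colors=256):
    # Combine the two networks: add their logits, then softmax over the
    # 256 possible values of each color channel, so the low-res evidence
    # and the learned prior jointly shape every output pixel.
    b, _, h, w = cond_logits.shape
    logits = (cond_logits + prior_logits).view(b, 3, num_colors, h, w)
    return torch.softmax(logits, dim=2)

# Hypothetical usage: in practice, pixels would be sampled one at a
# time, with each new pixel fed back into the prior network.
cond, prior = ConditioningNetwork(), PriorNetwork()
low_res = torch.rand(1, 3, 8, 8)      # the tiny input image
canvas = torch.zeros(1, 3, 32, 32)    # high-res pixels generated so far
probs = pixel_distribution(cond(low_res), prior(canvas))
print(probs.shape)  # torch.Size([1, 3, 256, 32, 32])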

It is likely we will see more and more image-processing tech like this in the future. Artificial intelligence is getting pretty good at generating images, and Google and Twitter have both put a lot of research into image enhancement. At this rate, maybe crime-show tech will one day become reality.

Christian de Looper