At the I/O developer conference on Thursday, Google unveiled a bold plan to bring desktop-grade graphics capabilities to your smartphone — using a few behind-the-scenes tricks.
The technology, called Seurat, takes high-fidelity virtual reality scenes and aggressively simplifies the underlying geometry, to the point where a smartphone can render the whole scene in real time.
Google was light on the exact details, but through a partnership with Industrial Light & Magic's internal "Experience Lab," or ILMxLAB, we got a chance to see the technology in action, and it's pretty impressive. Taking users into an interactive VR version of a scene from Rogue One: A Star Wars Story, ILMxLAB created a lavishly detailed world using high-powered desktop hardware.
Using Google's Seurat technology, ILMxLAB was able to reduce the overall polygon count and downscale textures without sacrificing too much graphical fidelity. While the simplified version did not look quite as sharp, it took only about 13 milliseconds for a smartphone to render each frame, down from an hour per frame on high-end desktop hardware.
The behind-the-scenes trickery Seurat employs pared down and compressed the original scene from over 50 million polygons to just 72,000. Some quality was lost in the process, which is to be expected. The real goal of the technology is to enable better VR experiences on mobile devices, which possess only a fraction of the power a VR-ready desktop has at its disposal.
As illustrated in the demo, one key component of the technology essentially eliminates background details that are not visible to users, which enables the mobile versions of VR scenes to appear lifelike and high-quality without overtaxing smartphone hardware.
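Google did not detail how Seurat decides what to discard, but the general idea of dropping geometry the viewer can never see is a standard one in graphics. As a rough, hypothetical illustration only (not Seurat's actual algorithm), the sketch below applies simple back-face culling: triangles oriented away from the viewer's position are removed before the scene would be handed to a phone.

```python
# Toy illustration of visibility-based scene simplification: discard
# triangles whose front face points away from the viewer. This is a
# deliberately minimal stand-in for whatever Seurat actually does.

from dataclasses import dataclass

@dataclass
class Triangle:
    a: tuple  # each vertex is an (x, y, z) tuple
    b: tuple
    c: tuple

def _sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def cull_backfaces(triangles, eye):
    """Keep only triangles whose surface normal points toward `eye`."""
    visible = []
    for t in triangles:
        normal = _cross(_sub(t.b, t.a), _sub(t.c, t.a))
        if _dot(normal, _sub(eye, t.a)) > 0:  # facing the viewer
            visible.append(t)
    return visible

# Two triangles with opposite winding: only one faces an eye at +z.
front = Triangle((0, 0, 0), (1, 0, 0), (0, 1, 0))  # normal toward +z
back = Triangle((0, 0, 0), (0, 1, 0), (1, 0, 0))   # normal toward -z
kept = cull_backfaces([front, back], eye=(0, 0, 5))
```

In a real pipeline this pre-pass would run once on desktop hardware, so the phone only ever touches the surviving geometry, which is how a 50-million-polygon scene can shrink to tens of thousands.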
This forward leap in graphical quality is a big deal for Google’s Daydream ecosystem, which it hopes will become the standard for mobile VR and AR experiences.