At SIGGRAPH 2016, taking place next week in Anaheim, California, Nvidia Research plans to demonstrate methods to improve the quality of foveated rendering in virtual reality. Nvidia believes its new set of “perceptually motivated” methods offers better image quality than current virtual reality rendering algorithms, while also enabling a “significant” reduction in rendering cost without sacrificing visual quality.
As a brief explainer, we humans have two vision systems. First, there’s foveal vision: sharp, detailed, and the main pipeline of visual information to our brain. Think of the fovea, that little spot at the back of your retina, as a hotspot of sorts, the place where visual acuity is highest and where most of the retinal cones reside.
And then there’s peripheral vision, which provides our brain with secondhand visual information. While extremely useful, peripheral vision lacks the detail we see through foveal vision, supplying mostly color and movement. It lets us notice that it’s raining outside without having to turn our head and look directly out the window.
These two systems have led to the development of foveated rendering techniques that mimic how our eyes take in a scene. A Google search will bring up foveated imaging, which essentially blurs the picture away from the point the viewer is looking at. The idea itself isn’t new: according to Nvidia, foveated rendering has been around for some 20 years.
“A significant reduction in human visual acuity occurs between images forming on the center of the retina (the fovea) and those outside the fovea (the periphery). Various optical and neural factors combine to cause this quality degradation, which increases with distance from the fovea, and is known as foveation. Foveated rendering algorithms exploit this phenomenon to improve performance,” states an overview provided by Nvidia Research (PDF).
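To make that falloff concrete, researchers often model it as a minimum angular resolution (MAR) that grows roughly linearly with eccentricity, the angular distance from the gaze point. Below is a minimal GLSL sketch that visualizes the resulting per-pixel quality budget; the uniform names, the linear screen-to-degrees conversion, and the constants are illustrative assumptions rather than the parameters from Nvidia’s paper.

```glsl
#version 450
// Fragment shader sketch: map eccentricity to a relative shading
// quality, shown here as a grayscale image. All names and constants
// are assumptions for illustration, not Nvidia's published values.
uniform vec2  uGazePoint;   // gaze position in normalized screen coords
uniform float uDegPerUnit;  // rough screen-distance-to-degrees factor (assumed linear)

in  vec2 vUV;
out vec4 fragColor;

// Minimum angular resolution grows roughly linearly with
// eccentricity e (in degrees): mar(e) = w0 + m * e.
float minAngleOfResolution(float eccDeg)
{
    const float w0 = 1.0 / 48.0; // foveal MAR in degrees (assumed)
    const float m  = 0.022;      // falloff slope (assumed)
    return w0 + m * eccDeg;
}

void main()
{
    float eccDeg  = distance(vUV, uGazePoint) * uDegPerUnit;
    // 1.0 at the gaze point, falling off toward the periphery.
    float quality = minAngleOfResolution(0.0) / minAngleOfResolution(eccDeg);
    fragColor = vec4(vec3(quality), 1.0);
}
```

A renderer can then spend shading work in proportion to that quality map instead of shading every pixel at full rate.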
According to the Nvidia Research team, current foveated rendering algorithms can disrupt the virtual reality experience by, for example, creating a sense of tunnel vision when the peripheral blur is too heavy. Temporal aliasing is another artifact: because the periphery is rendered at reduced detail, objects there can appear to jump or suddenly pop into view as the scene changes, rather than moving smoothly.
Aaron Lefohn from the Nvidia Research team recently said that the new foveated rendering methods can cut pixel shading work in the peripheral area by a factor of two to three. Those savings can then be spent on the scene’s focal region, providing a more detailed, realistic experience.
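One common way to realize that kind of saving, seen in earlier multi-resolution foveated renderers and not necessarily Nvidia’s exact approach, is to shade the periphery into a lower-resolution layer and blend it with a full-resolution foveal layer at display time. Here is a hedged GLSL sketch of the final composite, with hypothetical uniform names:

```glsl
#version 450
// Display-time composite of a sharp foveal layer and a coarsely
// shaded peripheral layer. Both are assumed to be full-screen render
// targets here for simplicity; in practice the foveal layer usually
// covers only a small inset around the gaze point.
uniform sampler2D uFovealLayer;     // shaded at full resolution
uniform sampler2D uPeripheryLayer;  // shaded at e.g. half or third resolution
uniform vec2  uGazePoint;           // gaze in normalized screen coords
uniform float uInnerRadius;         // fully sharp inside this radius
uniform float uOuterRadius;         // fully coarse outside this radius

in  vec2 vUV;
out vec4 fragColor;

void main()
{
    float d = distance(vUV, uGazePoint);
    // 0 in the fovea, 1 in the far periphery, smooth in between.
    float t = smoothstep(uInnerRadius, uOuterRadius, d);
    vec4 sharp  = texture(uFovealLayer,    vUV);
    vec4 coarse = texture(uPeripheryLayer, vUV); // hardware-filtered upsample
    fragColor = mix(sharp, coarse, t);
}
```

The smoothstep falloff between the two radii avoids a hard seam between the layers, precisely the kind of visible border that feeds the tunnel-vision complaint mentioned earlier.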
To demonstrate the new foveated rendering techniques next week, Nvidia Research will use a prototype based on the second-generation Oculus Rift Development Kit (DK2), fitted with a special high-speed gaze tracker provided and installed by SensoMotoric Instruments. The tracker runs at 250Hz with a response latency of 6.5ms.
On the software side, the demonstration will run a real-time rendering testbed built on OpenGL and GLSL. The testbed renders 3D scenes in stereo at 75Hz, driven by live gaze data streamed from the high-speed tracker, which will let Nvidia Research pit its new foveated rendering techniques against several existing techniques used in VR today.
“Our testbed uses high-quality anti-aliasing techniques to ensure temporal stability in foveated rendering. We enable 8× multisample anti-aliasing, and have implemented a TAA algorithm inspired by Unreal Engine 4 [Karis 2014], modified to work with all of the above techniques,” the team adds.
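For readers unfamiliar with TAA, the heart of a resolve pass in the spirit of Karis 2014 is to reproject the previous frame’s accumulated image and blend a small fraction of the current frame into it, clamping the history against the current pixel’s neighborhood so stale colors don’t ghost. A rough GLSL sketch follows; the texture names and the blend constant are assumptions, not the testbed’s actual code.

```glsl
#version 450
// Minimal temporal anti-aliasing resolve: reproject history via
// motion vectors, clamp it to the current 3x3 neighborhood, then
// accumulate exponentially.
uniform sampler2D uCurrentColor;
uniform sampler2D uHistoryColor;
uniform sampler2D uMotionVectors; // screen-space motion, previous -> current
uniform vec2      uTexelSize;     // 1.0 / render target resolution

in  vec2 vUV;
out vec4 fragColor;

void main()
{
    vec2 prevUV  = vUV - texture(uMotionVectors, vUV).xy;
    vec3 current = texture(uCurrentColor, vUV).rgb;
    vec3 history = texture(uHistoryColor, prevUV).rgb;

    // Neighborhood clamp: constrain history to the min/max of the
    // current 3x3 neighborhood so disoccluded pixels don't ghost.
    vec3 lo = current, hi = current;
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x) {
            vec3 c = texture(uCurrentColor, vUV + vec2(x, y) * uTexelSize).rgb;
            lo = min(lo, c);
            hi = max(hi, c);
        }
    history = clamp(history, lo, hi);

    // Exponential accumulation: mostly history, a little current frame.
    fragColor = vec4(mix(history, current, 0.1), 1.0);
}
```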
Of course, we won’t see this technology in consumer VR until headset makers build in eye tracking. That may take a year or two, once hardware component prices drop to a reasonable level. Until then, you can see the difference Nvidia Research’s new methods make in a blog post right here, and in the video below.