While Bebop Sensors is serving up hand- and finger-tracking gloves to virtual and augmented reality headset makers, Mindmaze is working on technology to accurately track facial expressions. Simply called Mask, it sits between the user’s face and the virtual reality headset, and it will be compatible with the HTC Vive, the Oculus Rift, and Samsung’s Gear VR when it launches, possibly at the end of 2017.
Mask consists of eight low-cost electrodes mounted within the foam padding of a VR headset. The electrodes detect the electrical impulses of the face, the system analyzes those signals with machine learning, and the resulting facial movements are replicated on a virtual avatar. This tech supposedly detects expressions 20 to 30 milliseconds before they physically appear on your face.
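Mindmaze has not published implementation details, but the pipeline it describes — read electrode signals, classify them, drive an avatar — can be sketched in a few lines. The sketch below is purely illustrative: the channel count comes from the article, while the sampling rate, window length, and every function name are hypothetical placeholders, not Mindmaze code.

```python
# Minimal sketch of the kind of pipeline described above, NOT Mindmaze's actual code.
# Assumptions (hypothetical): 8 electrode channels sampled at 1 kHz, a 30 ms window,
# a placeholder classifier, and a placeholder avatar API.

import numpy as np

NUM_CHANNELS = 8        # electrodes in the headset's foam padding (from the article)
SAMPLE_RATE_HZ = 1000   # assumed sampling rate
WINDOW_MS = 30          # roughly the lead time quoted for detecting expressions

def read_electrode_window():
    """Placeholder for reading one window of raw electrode samples.
    Returns an array of shape (channels, samples_per_window)."""
    samples = int(SAMPLE_RATE_HZ * WINDOW_MS / 1000)
    return np.random.randn(NUM_CHANNELS, samples)  # stand-in for real hardware input

def classify_expression(window):
    """Placeholder for the machine-learning model that maps a signal window
    to an expression label such as 'smile', 'frown', or 'neutral'."""
    return "neutral"

def apply_to_avatar(expression):
    """Placeholder for driving the virtual avatar's face with the predicted label."""
    print(f"avatar expression -> {expression}")

if __name__ == "__main__":
    # One iteration of the loop: read a window, classify it, update the avatar.
    window = read_electrode_window()
    apply_to_avatar(classify_expression(window))
```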
However, Mindmaze will not sell Mask directly to users. Instead, it will be sold to headset makers to integrate into their own retail kits. Because Mask is a very low-cost solution, VR headset kits should not see a dramatic price increase. The big cost on Mindmaze’s part was creating software that could make sense of the data generated by those eight electrodes.
According to Mindmaze CEO Tej Tadi, a computer receiving the eight streams of data registers the rise and fall of each signal, much like an EKG. Mindmaze therefore had to draw on its neurotech expertise to create an algorithm (a set of rules for solving a problem) that reads the data and pulls out the information needed to form a set of facial expressions.
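To make the "rise and fall" idea concrete, here is a toy version of how per-channel signal swings could be summarized and fed to a classifier. Everything here is an assumption for illustration: the specific features (swing, baseline, average slope), the logistic-regression model, and the made-up training data are stand-ins, not the algorithm Tadi describes.

```python
# Toy illustration: summarize each channel's rise and fall, then map the summary
# to an expression label. This is a sketch under stated assumptions, not Mindmaze's
# algorithm.

import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(window):
    """window: (8, n_samples) array of electrode readings.
    Summarize each channel's rise and fall with simple statistics."""
    swing = window.max(axis=1) - window.min(axis=1)       # how far the signal rises and falls
    baseline = window.mean(axis=1)                        # average signal level
    mean_slope = np.diff(window, axis=1).mean(axis=1)     # overall upward/downward trend
    return np.concatenate([swing, baseline, mean_slope])  # 24 features total

# Fake training data: signal windows labeled with the expression the wearer was making.
rng = np.random.default_rng(0)
train_windows = rng.standard_normal((100, 8, 30))
train_labels = rng.choice(["neutral", "smile", "frown"], size=100)

X = np.stack([extract_features(w) for w in train_windows])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# At runtime, each incoming window is reduced to features and classified.
new_window = rng.standard_normal((8, 30))
print(clf.predict([extract_features(new_window)])[0])
```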
“When I strapped on the headset to try Mask for myself, even without any calibration, a range of canned expressions that I made were quickly reflected on the face of an avatar that represented me in the virtual world,” reports Ben Lang from Road to VR. “When I smiled, it smiled. When I frowned, it frowned. It was easily the best calibration-free tech that I’ve seen of this sort.”
That is good news coming from a hands-on experience with a prototype. Lang said this model supported only 10 different facial expressions. On top of that, facial movements were sterile, meaning Mask did not render the small facial details that make your expressions unique. Eventually, the team plans to add more facial expressions along with basic eye tracking to convey simple eye movements.
That said, Mask is still in the prototype stage. The model Lang tested tended to render facial expressions incorrectly and had trouble picking up normal blinking. But that is where machine learning comes in: The algorithm will get smarter over time, making for a more “robust” experience as it learns the facial movements of each user.
While all of this sounds like it requires a lot of computing power, Tadi said that the workload is low enough for the system to run on smartphone-based VR headsets. That is thanks to the electrode-and-algorithm combo, which stands in stark contrast to the camera-based facial tracking approach used on PC.
So when will Mask be ready? There is a good chance it will be integrated into VR headsets by the 2017 holiday season or shortly thereafter.