Researchers from the University of Southern California, Pinscreen, and Microsoft have developed a hair rendering technique powered by deep learning. Their neural network can render 3D hair models from only a 2D reference image, and is the first of its kind to work in real-time.
If you’ve ever turned on Nvidia’s HairWorks in games such as The Witcher 3 or Final Fantasy XV, you may well have noticed your in-game performance drops significantly – even if it’s just Geralt’s lovely mop on the screen. Rendering a couple hundred thousand individual strands of hair is no walk in the park.
But AI researchers believe a convolutional neural network may be up to this demanding task. Neural networks are loosely modelled on the brain, connecting nodes in layers so the system can learn to classify an input into various groups. These deep-learning systems have a variety of uses, and image recognition techniques in particular are now finding new applications in rendering.
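The core operation in a convolutional network is simple enough to sketch by hand: slide a small filter over an image, and the resulting feature map lights up wherever the filter's pattern appears. The toy kernel and image below are illustrative inventions, not anything from the researchers' network, but they show the idea of a single convolutional layer detecting an edge, such as the silhouette of a hair strand.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image, producing a feature map.
    This is the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Keep only positive responses -- a standard activation function."""
    return np.maximum(x, 0.0)

# Hypothetical edge-detector kernel: responds where intensity rises
# left-to-right, e.g. at a dark-to-light boundary in the image.
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

# A tiny image: dark on the left, bright on the right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

feature_map = relu(conv2d(image, edge_kernel))
# The middle column of feature_map fires (value 2.0) along the edge;
# everywhere else stays at 0.
```

A real network stacks many such layers, learning its kernels from data rather than hand-picking them as done here.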
“Realistic hair modeling is one of the most difficult tasks when digitizing virtual humans,” the researchers say. “In contrast to objects that are easily parameterisable, like the human face, hair spans a wide range of shape variations and can be highly complex due to its volumetric structure and level of deformability in each strand.”
To teach the neural network, the researchers fed it a dataset of 40,000 different hairstyles and 160,000 2D orientation images taken from random viewpoints. The network can then reproduce 3D rendered hair, in various styles, lengths, and colours, from a single 2D image. In milliseconds. It can also work from video, rendering movement across individual strands of hair – all interacting with each other.
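The training setup amounts to a regression problem: given a 2D orientation image as input, predict the 3D strand geometry that produced it. The sketch below uses made-up sizes and a single linear layer in place of the researchers' deep network, purely to show that image-to-strands framing; none of the dimensions or the synthetic data reflect the actual paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real data: each "orientation image" is a
# flattened 8x8 map of strand directions; each target is 3D coordinates
# for 4 sample points on each of 10 strands.
n_samples, img_dim, out_dim = 200, 8 * 8, 10 * 4 * 3
X = rng.normal(size=(n_samples, img_dim))

# Pretend ground truth: strands generated by a fixed (unknown) mapping,
# standing in for the 40,000-hairstyle synthetic dataset.
true_W = rng.normal(size=(img_dim, out_dim)) * 0.1
Y = X @ true_W

# One linear layer trained by gradient descent on mean-squared error --
# a toy substitute for the deep network in the paper.
W = np.zeros((img_dim, out_dim))
lr = 0.1
for _ in range(300):
    pred = X @ W
    grad = X.T @ (pred - Y) / n_samples  # MSE gradient
    W -= lr * grad

final_err = np.mean((X @ W - Y) ** 2)
# After training, the prediction error is a small fraction of the
# initial error, i.e. the model has learned the image-to-strands map.
```

The real system's speed comes from this same structure: once trained, producing hair from a new image is just a forward pass, which is why it runs in milliseconds.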
Here’s a video of real-time hair reconstruction in action. Just try and ignore the model’s eerie smile and maybe you’ll be able to sleep tonight.
“The hair from our method can preserve better local details and looks more natural,” the researchers say, “especially for curly hairs.”
It’s not a perfect system, and some hairstyles are poorly replicated. However, the researchers believe expanding the training dataset with even more hairstyles should help it replicate a wider range of hairdos.
This AI tech could have an impact on in-game hair rendering, or maybe even form part of an inevitable AI-powered GameWorks 2.0 suite – whenever that happens. The current gen might not be cut out for the job – this demo was running across multiple Nvidia Titan Xp graphics cards – but if Nvidia’s AI-accelerating Tensor Cores make it into the GeForce GTX 1180, there may be hope for real-time AI implementations in games as soon as the next generation of GPUs.
In the meantime, this tech could have wide-ranging implications for how game developers go about creating their next games. Devs could skip the tedious manual modelling and take realistic hair models straight from motion capture, leaving the heavy lifting to a trusty neural network.