Over 500,000 images have been created in Nvidia’s AI painting application GauGAN, the GPU company has announced at SIGGRAPH, including those by professional concept artists. The tool takes a user’s simple image, made up of basic colours and brushes within the app, and uses the power of a neural network to create a stunning, near-photorealistic landscape from next to nothing. And if you haven’t tried it out for yourself yet, we think you should – it’s genuinely a lot of fun.
The beta for the AI art tool has been live since earlier in the year on the Nvidia AI Playground. Since then artists of all skill levels have flocked to the app to give it a whirl for themselves, with the green team now boasting some half a million images created on the service. You can try it out here, for free, right in your browser.
Named after post-impressionist painter Paul Gauguin, GauGAN (the GAN stands for generative adversarial network) uses a neural network trained on over one million images to turn simple sketches into landscapes in an undeniably impressive fashion. You still need to have your head screwed on in the paint stage for good results – drawing clouds in the ocean confuses even the best AI going – but professionals are finding the art produced by the network is good enough, perhaps with a couple of tweaks here and there, to synthetically create a bespoke landscape that can be used to contextualise images such as concept art.
“GauGAN popped on the scene and interrupted my notion of what I might be able to use to inspire me,” Colie Wertz, a concept artist and modeler whose work includes Star Wars, Transformers and Avengers, says. “It’s not something I ever imagined having at my disposal.”
“Real-time updates to my environments with a few brush strokes is mind-bending. It’s like instant mood, this is forcing me to re-investigate how I approach a concept design.”
Users can also reverse the process, inputting a landscape into the system and receiving a segmentation map in return. Since the beta’s release, Nvidia has also added the option to upload a style filter, which lets the app recreate a basic design in the style of your reference image.
The core of the app’s functionality is based on a research paper published by Nvidia called ‘Semantic Image Synthesis with Spatially-Adaptive Normalization’, or SPADE for short, developed by Ming-Yu Liu, Taesung Park, Ting-Chun Wang, and Jun-Yan Zhu. The source code for the app has also been publicly released for non-commercial use.
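The key idea in SPADE is that, instead of the single per-channel scale and shift of ordinary batch normalization, the scale and shift vary per pixel and are predicted from the user’s segmentation map – so the rough layout you paint survives normalization instead of being washed out. A minimal NumPy sketch of that modulation step (assuming the spatially varying scale and shift have already been produced elsewhere; in the real model they come from small convolutional networks applied to the segmentation map):

```python
import numpy as np

def spade_normalize(x, gamma_map, beta_map, eps=1e-5):
    """Simplified spatially-adaptive normalization (SPADE).

    x:         feature map of shape (N, C, H, W)
    gamma_map: per-pixel scale derived from the segmentation map, same shape
    beta_map:  per-pixel shift derived from the segmentation map, same shape
    """
    # Normalize each channel with batch statistics, as in batch norm
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_norm = (x - mean) / np.sqrt(var + eps)
    # Modulate with a spatially varying scale and shift: unlike plain
    # batch norm, gamma and beta differ at every pixel, so the painted
    # layout is re-injected into the normalized features
    return gamma_map * x_norm + beta_map
```

This is only an illustration of the normalization trick; the released GauGAN code wraps it in full generator and discriminator networks.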
You don’t need an Nvidia RTX GPU fitted with Tensor Cores to use the web app, either. It’s instead hosted on Amazon Web Services kit fitted with Nvidia GPUs capable of the robo-smarts, so go give it a try.
Header image courtesy of Colie Wertz