Researchers at Nvidia have developed a deep learning model that can take shitty digital sketches – like the ones you’d create with a trackpad, zero talent, and Microsoft Paint – and generate beautiful photorealistic landscapes in an instant.
It’s called GauGAN (a nod to both post-Impressionist painter Paul Gauguin and the generative adversarial networks that power it), and it works like a simple digital painting tool. But instead of a palette of colors, it offers a collection of natural elements, like sand, snow, and sky, each represented by a different hue. Draw something with those, and GauGAN will use your input to generate a landscape, swapping your gross paint blobs and clumsy brush strokes for the elements you chose.
Nvidia says the model was trained on a million images, and it can add details like reflections in bodies of water on its own. It’s also context-aware: switch a patch of grass to snow, and nearby trees turn barren, just like in a real winter.
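Under the hood, that colored doodle is effectively a semantic label map: each hue tags its pixels as sand, snow, sky, and so on, and a GAN generator translates the map into a photo. Here’s a minimal Python sketch of that idea; the palette values and the tiny stand-in generator are illustrative assumptions on our part, not Nvidia’s actual model (the paper describes a far larger generator built on spatially-adaptive normalization, or SPADE).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical palette: each "paint" hue stands for one natural element.
PALETTE = {
    (237, 201, 175): 0,  # sand
    (255, 255, 255): 1,  # snow
    (135, 206, 235): 2,  # sky
    (46, 139, 87): 3,    # grass
}
NUM_CLASSES = len(PALETTE)

def sketch_to_label_map(sketch: torch.Tensor) -> torch.Tensor:
    """Turn an (H, W, 3) uint8 doodle into a (NUM_CLASSES, H, W) one-hot map."""
    h, w, _ = sketch.shape
    labels = torch.zeros(h, w, dtype=torch.long)
    for rgb, cls in PALETTE.items():
        mask = (sketch == torch.tensor(rgb, dtype=sketch.dtype)).all(dim=-1)
        labels[mask] = cls
    return F.one_hot(labels, NUM_CLASSES).permute(2, 0, 1).float()

# Stand-in generator: two conv layers mapping the label map straight to RGB.
# GauGAN's real generator instead feeds the map into every normalization
# layer (SPADE) of a much deeper network trained adversarially.
generator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
    nn.Tanh(),  # pixel values in [-1, 1], the usual GAN convention
)

# A 4x4 "trackpad masterpiece": sky on the top half, grass on the bottom.
doodle = torch.tensor(
    [[(135, 206, 235)] * 4] * 2 + [[(46, 139, 87)] * 4] * 2,
    dtype=torch.uint8,
)
label_map = sketch_to_label_map(doodle)          # (4, 4, 4) one-hot classes
fake_photo = generator(label_map.unsqueeze(0))   # (1, 3, 4, 4) synthetic image
```

The clever bit in the real system is that the label map doesn’t just enter the network once at the input: SPADE re-injects it at every normalization layer, which keeps the semantic layout of your sketch from washing out as the image gets synthesized.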
Watch the demo in the clip above to see GauGAN in action, head to Nvidia’s blog to learn more about how the deep learning model works, or dive into the research paper for the full details. Too bad it’s not available to try online; you’ll need to visit the company’s booth at its GPU Technology Conference in San Jose this week to give it a go yourself.