NVIDIA DLSS: Deep Learning Super Sampling. You know it, we know it, but let’s not pretend everyone who plays games is also an expert in complex deep-learning algorithms. So what is it?
Introduced with NVIDIA’s RTX 20 Series cards, DLSS represents not just a big step forward for performance, but a lateral one too. Because in addition to all the number crunching that’s being handled on-die whenever you ask your gaming laptop to render the next frame of, say, Control, Call of Duty: Black Ops Cold War, or Watch Dogs Legion, some of that rendering is enhanced by AI models created by a supercomputer many miles away from your trusty rig.
Which means the results onscreen are pretty surprising for a small form factor card powering a gaming laptop. Mobile devices have to make performance tradeoffs in the name of thermal management and the limited PCB real estate designers have to work with. Leveraging the power of AI, accelerated by Tensor Cores, however, is a very literal game changer.
Of course, it’s still anchored by calculations happening on-die. That comes courtesy of the GPU’s dedicated Tensor Cores, which accelerate deep-learning work. When they’re presented with a complex image, the Tensor Cores effectively have the option to phone a friend. And lucky for you, that friend happens to be a massive supercomputer. A supercomputer that’s been specifically trained on ultra-high-fidelity 16K renders, comparing them against low-resolution versions of the same frames and building a model that interpolates the differences.
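The training idea above can be sketched in a few lines. This is a minimal illustration, not NVIDIA’s actual pipeline: it simulates a low-resolution render by average-pooling a reference frame, reconstructs it with a naive upscale (the step a trained neural network would replace), and measures how far the reconstruction is from the reference. The function names and the tiny 8×8 "frame" are invented for the example.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool the image by `factor` to simulate a low-resolution render."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample_nearest(img, factor):
    """Naive nearest-neighbour upscale; a trained model replaces this step in DLSS."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

rng = np.random.default_rng(0)
reference = rng.random((8, 8))           # stands in for the ultra-high-res "ground truth" frame
low_res = downsample(reference, 2)       # the frame the GPU actually renders
restored = upsample_nearest(low_res, 2)  # candidate reconstruction at full size

# Training drives down a loss like this over huge numbers of frame pairs,
# teaching the model to reconstruct detail the naive upscale cannot.
loss = np.mean((restored - reference) ** 2)
```

The smaller that loss gets across many training images, the closer the model’s output comes to the full-resolution reference.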
So when the Tensor Cores inside your laptop’s GPU make this call, essentially what’s happening is they’re accessing the AI model compiled by the supercomputer and then written into an NVIDIA Game Ready driver. That model allows the GPU to output crisp, high-resolution images much faster than traditional rendering.
What that means for you, happily gorging on ridiculously detailed frames coming at you seamlessly fast, is the ability to run a game at a higher resolution than you’d otherwise be able to without experiencing a performance drop. Rendering an image at 4K natively takes a lot of grunt, but using DLSS to deliver an image close to the same fidelity as native 4K takes just 25-50% of the computational power. The upshot is significant performance gains.
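That 25-50% figure lines up with simple pixel arithmetic. Assuming the GPU internally renders at a lower common resolution before upscaling to 4K (the exact internal resolutions depend on the DLSS quality mode), the fraction of native-4K pixels actually shaded works out like this:

```python
# Pixels per frame at common resolutions (width * height).
native_4k = 3840 * 2160   # 8,294,400 pixels
qhd = 2560 * 1440         # 3,686,400 pixels
full_hd = 1920 * 1080     # 2,073,600 pixels

# Fraction of the native-4K pixel budget rendered before upscaling.
print(qhd / native_4k)      # ~0.44 — near the top of the 25-50% range
print(full_hd / native_4k)  # 0.25 — the low end of the range
```

Shading well under half the pixels, then letting the AI model reconstruct the rest, is where the headroom for higher framerates comes from.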
To put that in real terms: vast firefights in razor-sharp focus, fought by people in uniforms crisp enough for a museum exhibit in Call of Duty: Black Ops Cold War. Nameless scientific anomalies reflecting pin-sharp off a marble floor in Control’s mysterious Federal Bureau of Control. Sticking it to the man in a dystopian London that looks, frankly, way better than the real thing via Watch Dogs Legion.
The original implementation of DLSS trained NVIDIA’s supercomputer only on in-game assets, but DLSS has since been updated and rolled out across RTX cards, making your GPU’s Tensor Cores even smarter. Images from outside games are now used to teach the deep-learning AI how to analyse and sharpen an image, for an even crisper frame on your gaming laptop screen, bridging the gap even further between the original render resolution and running the game natively at a higher res. The result is better framerates and performance boosts.
Gaming performance is just the start, though. Broadcasting features, such as RTX Greenscreen and RTX AR in NVIDIA Broadcast, use AI to give streamers tools to enhance the way they engage with their audiences: smart image recognition that can accurately swap out backgrounds, model faces, and apply 3D effects to them. AI models also clean up background noise picked up by microphones.
And, of course, you can get all this and more in HP’s GeForce RTX-powered gaming laptops – just click the link to take a look.