We’re assuming none of you have forgotten the UE5 tech demo yet. Who could have? Epic Games said in a press release that with the upcoming engine the company is aiming to “achieve photorealism on par with movie CG and real life”, and the demo we all saw certainly impressed those ambitions on us. But what we didn’t see is how GPU-intensive this demo was – as it turns out, not all that much.
We recently spoke to Epic Games and learnt a little more about making Unreal Engine 5. Vice president of engineering Nick Penwarden said, “I can say that the GPU time spent rendering geometry in our UE5 demo is similar to the geometry rendering budget for Fortnite running at 60fps on consoles.” Let’s give ourselves a moment for that to sink in.
Fortnite is an incredibly well-optimised game, all things considered. Along with other esports titles like League of Legends and CS:GO, it’s one of those games that can be used as a litmus test for lower-specced gaming rigs. If you have an entry-level build, you should be able to play a game like Fortnite – you don’t need one of the very best graphics cards or the best gaming CPU around to run it.
That’s one thing that makes UE5 so exciting. If it’s true that running the UE5 demo only took the “geometry rendering budget” of Fortnite at 60fps on console, we might be looking at a game engine that really gives lower-end hardware something to work with. Sure, geometry rendering isn’t the be-all and end-all of GPU gruntwork, and the GPU isn’t the only component relevant to gaming performance, but geometry rendering certainly takes up a fair chunk of the graphics pipeline, and the GPU is the single most important component when it comes to gaming performance.
This is all in the context of UE5’s ‘Nanite’, the engine’s geometry system that’s intended to help create photorealistic objects all the way up to the horizon. It does this by using millions or billions of pixel-sized, virtualised micropolygons to form the geometry of a game scene, then downscaling the detail to match the specific device it’s running on and the detail that’s required of different objects at any moment in time.
For example, as the player moves closer to an object, the engine streams in more polygons, increasing its detail – all without the developer having to manually author LOD (level of detail) models for each object.
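The core idea behind this kind of detail scaling can be sketched in a few lines. To be clear, this is not Nanite’s actual algorithm (which is far more sophisticated, working on clusters of triangles streamed from disk) – it’s just a minimal illustration of the principle that an object’s polygon budget should track its projected size on screen, so triangles stay roughly pixel-sized. All names and numbers here are our own assumptions for illustration.

```python
import math

def select_poly_budget(distance, object_size, fov_y_deg, screen_height_px, full_polys):
    """Illustrative only -- not Nanite's real algorithm.

    Pick a polygon count for an object so that triangles end up
    roughly pixel-sized at the object's current projected size.
    """
    # Height of the view frustum at this distance, from the vertical FOV.
    frustum_height = 2 * distance * math.tan(math.radians(fov_y_deg) / 2)
    # Object's projected height on screen, in pixels.
    projected_px = (object_size / frustum_height) * screen_height_px
    # Roughly one triangle per pixel of projected area is the ideal ceiling;
    # never exceed the full-detail mesh, never drop below one triangle.
    target = int(projected_px ** 2)
    return min(full_polys, max(target, 1))
```

With numbers like these, an object 10 metres away gets a budget around a hundred times larger than the same object 100 metres away – the detail falls away smoothly with distance, and the developer never picks LOD thresholds by hand.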
Analysis: We’re moving towards software, rather than hardware, optimisation
It used to be the case that games were made, and then hardware would run them as well as it could. This is an oversimplification, of course, since developers would try to ensure games weren’t so detailed that they couldn’t run on mainstream PC or console hardware. And, of course, there were general optimisations made to increase performance, but for the most part, if you wanted to improve a game’s performance you had to pull back its graphical fidelity. It looks like this is changing.
We have Nvidia’s DLSS 2.0, a technology that uses an AI neural network to figure out what a game would look like at a super-high resolution while rendering it at a lower one. It gives the graphical accuracy of a high-res game while only requiring the performance of a lower resolution. We also have the promise of the next-gen Xbox Series X and PS5 storage technologies, and innovative ways of using that hardware to improve performance at the software level through texture streaming.
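The neural network behind DLSS is proprietary, but the headline saving is simple arithmetic: per-pixel shading cost scales with the number of pixels rendered, so rendering internally at a lower resolution and upscaling cuts that cost by the ratio of pixel counts. A quick back-of-the-envelope check (our own illustrative numbers, not Nvidia’s):

```python
def shading_work_saved(native, internal):
    """Fraction of per-pixel shading work saved by rendering at a
    lower internal resolution and upscaling to the native one.
    Illustrative arithmetic only -- ignores fixed per-frame costs."""
    native_w, native_h = native
    internal_w, internal_h = internal
    return 1 - (internal_w * internal_h) / (native_w * native_h)

# Rendering internally at 1080p for a 4K output shades only a quarter
# of the pixels, saving 75% of the per-pixel work.
saved = shading_work_saved((3840, 2160), (1920, 1080))  # 0.75
```

In practice the real-world speedup is smaller, since upscaling itself costs GPU time and not all rendering work scales with resolution, but it shows why the technique is so attractive.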
Now we have another example of software making strides that were previously reserved for hardware. With Nanite, games built in UE5 could maintain a seriously impressive level of graphical fidelity without putting too much strain on the GPU and other components.
Things are no longer being left primarily to the brute force of a GPU. They’re being handled by innovative software and developer solutions – ones that let the seriously gorgeous UE5 tech demo run its geometry rendering as lightly as Fortnite runs on console.