And how do you think they got their hands on the GPU? Developers a tenth their size get access to prototype GPUs, but if anyone is going to have some sort of partnership with NVIDIA, it's going to be CDPR. But in general I think people are reading too much into this. Tons of trailers these days start with a disclaimer "rendered using engine X on GPU Y". It just so happens that this was probably run on an RTX 50 series prototype, but they can't call it that. Or: using an unannounced GPU for marketing purposes to get those sweet, sweet NVIDIA sponsorship bucks.
UE5 is always going to be custom for a studio the size of CDPR: they have the skill sets to modify the engine and get the performance they need out of it. Smaller developers that are strictly looking to pump out content will have a bigger problem with that. CDPR said they are using a specially customized UE5, and they are probably using some NVIDIA features in it as well, like the NVRTX branch.
Yeah exactly, so it makes total sense on the economic side even if it isn't "needed" on the technical side.
Hope so. You can do many things with UE5; it's an impressive engine versatility-wise, but you pay a price for that versatility, and the engine needs some serious fixes, imho.
Even some of his points about Nanite I can somewhat relate to, as I've thought similar things myself.
Nanite's software raster solves quad overdraw. The problem is that the software raster doesn't have HiZ culling, so Nanite must lean purely on cluster culling, and their clusters are over 100 triangles each. This results in significant overdraw to the V-buffer with kitbashed content (such as their own demos). But the V-buffer is just a 64-bit triangle + instance ID; overdraw doesn't mean shading the pixel many times.
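For anyone unfamiliar with what "just a 64-bit triangle + instance ID" means in practice, here's a minimal C++ sketch of a visibility-buffer entry; the field split and names are illustrative, not Nanite's actual layout:

```cpp
#include <cstdint>
#include <cstdio>

// A visibility-buffer entry: just IDs, no shading data.
// Field widths here are illustrative, not Nanite's actual encoding.
struct VisibilitySample {
    uint64_t packed;

    static VisibilitySample pack(uint32_t instanceId, uint32_t triangleId) {
        return { (uint64_t(instanceId) << 32) | triangleId };
    }
    uint32_t instanceId() const { return uint32_t(packed >> 32); }
    uint32_t triangleId() const { return uint32_t(packed & 0xFFFFFFFFu); }
};

int main() {
    // "Overdraw" to this buffer just overwrites one 64-bit value per pixel;
    // no material shading happens until the later resolve pass.
    VisibilitySample px = VisibilitySample::pack(/*instanceId=*/42, /*triangleId=*/1337);
    px = VisibilitySample::pack(17, 9001); // a closer triangle overwrites it cheaply
    std::printf("instance %u, triangle %u\n", px.instanceId(), px.triangleId());
}
```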
While the V-buffer is fast to write, it's slow to resolve. Each pixel shader invocation needs to load the triangle and run code equivalent to a full vertex shader three times. The material resolve pass also needs to calculate analytic derivatives, and material binning has its own complexities (which manifest as potential performance cliffs).
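Roughly, the per-pixel work the resolve pass has to redo looks like this CPU-side sketch (types and function names are illustrative stand-ins, not UE5 code):

```cpp
#include <array>
#include <cstdint>

struct float3 { float x, y, z; };
struct float4 { float x, y, z, w; };
struct float4x4 { float m[4][4]; };
struct Vertex { float3 position; float3 normal; /* uvs, tangents, ... */ };

// The same math the rasterization pass already ran per vertex.
static float4 transformToClip(const float4x4& viewProj, const float3& p) {
    const float v[4] = { p.x, p.y, p.z, 1.0f };
    float out[4] = { 0, 0, 0, 0 };
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += viewProj.m[row][col] * v[col];
    return { out[0], out[1], out[2], out[3] };
}

struct ResolvedPixel { float3 normal; /* interpolated attributes, ddx/ddy, ... */ };

ResolvedPixel resolvePixel(uint64_t visSample,
                           const Vertex* vertexBuffer,
                           const uint32_t* indexBuffer,
                           const float4x4& viewProj)
{
    const uint32_t triangleId = uint32_t(visSample & 0xFFFFFFFFu);

    // The expensive part: every pixel shader invocation re-fetches the
    // triangle's three vertices and reruns vertex-shader-equivalent code
    // three times before it can even start shading.
    std::array<Vertex, 3> tri;
    std::array<float4, 3> clip;
    for (int i = 0; i < 3; ++i) {
        tri[i]  = vertexBuffer[indexBuffer[triangleId * 3u + i]];
        clip[i] = transformToClip(viewProj, tri[i].position); // per pixel, x3
    }

    // ... compute barycentrics at this pixel from clip[], interpolate
    // attributes, and derive analytic ddx/ddy instead of HW derivatives ...
    (void)clip;
    return { tri[0].normal };
}
```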
It's definitely possible to beat Nanite with a traditional pipeline if your content doesn't suffer much from overdraw or quad efficiency issues, and you have good batching techniques for everything you render.
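As a rough illustration of what "good batching" means here, a sketch that collapses draws sharing a mesh and material into instanced batches (struct names and the batching key are illustrative, not any engine's actual API):

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// One requested draw per object; many objects share a mesh and material.
struct DrawRequest { uint32_t meshId; uint32_t materialId; uint32_t transformIndex; };

// One instanced draw call per (mesh, material) pair.
struct InstancedBatch {
    uint32_t meshId = 0;
    uint32_t materialId = 0;
    std::vector<uint32_t> transformIndices; // one instance per entry
};

std::vector<InstancedBatch> buildBatches(const std::vector<DrawRequest>& requests) {
    std::map<std::pair<uint32_t, uint32_t>, InstancedBatch> byKey;
    for (const DrawRequest& r : requests) {
        InstancedBatch& batch = byKey[{ r.meshId, r.materialId }];
        batch.meshId = r.meshId;
        batch.materialId = r.materialId;
        batch.transformIndices.push_back(r.transformIndex);
    }
    std::vector<InstancedBatch> batches;
    batches.reserve(byKey.size());
    for (auto& [key, batch] : byKey) batches.push_back(std::move(batch));
    return batches; // thousands of objects become a handful of instanced draws
}
```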
However, it's worth noting that GPU-driven rendering doesn't mandate a V-buffer, SW rasterizer, or deferred material system the way Nanite does. Those techniques have advantages, but they have big performance implications too. When I was working at Ubisoft (almost 10 years ago) we shipped several games with GPU-driven rendering (and virtual shadow mapping): Assassin's Creed Unity with massive crowds in big city streets, Rainbow Six Siege with fully destructible environments, etc. These techniques were already usable on last-gen consoles (1.8 TFLOP/s GPU). Nanite is quite heavy in comparison. But they are targeting single-pixel triangles; we weren't.
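The core of a GPU-driven pipeline without any of those extras is essentially a culling pass that fills indirect draw arguments, so the CPU never touches per-object visibility. A CPU-side sketch (the command layout mirrors Vulkan's VkDrawIndexedIndirectCommand; the cluster struct and bounds test are illustrative):

```cpp
#include <cstdint>
#include <vector>

// Indirect draw arguments, matching VkDrawIndexedIndirectCommand's layout.
struct DrawIndexedIndirectCommand {
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

// A renderable cluster with a bounding sphere and a range of indices.
struct Cluster {
    float    boundsCenter[3];
    float    boundsRadius;
    uint32_t firstIndex;
    uint32_t indexCount;
    uint32_t instanceId;
};

struct FrustumPlane { float nx, ny, nz, d; };

static bool sphereInFrustum(const Cluster& c, const FrustumPlane* planes, int planeCount) {
    for (int i = 0; i < planeCount; ++i) {
        const FrustumPlane& p = planes[i];
        const float dist = p.nx * c.boundsCenter[0] + p.ny * c.boundsCenter[1]
                         + p.nz * c.boundsCenter[2] + p.d;
        if (dist < -c.boundsRadius) return false; // fully outside this plane
    }
    return true;
}

// In a real engine this loop is a compute shader appending into a GPU buffer
// that ExecuteIndirect / vkCmdDrawIndexedIndirectCount then consumes.
std::vector<DrawIndexedIndirectCommand>
cullClusters(const std::vector<Cluster>& clusters, const FrustumPlane planes[6]) {
    std::vector<DrawIndexedIndirectCommand> draws;
    for (const Cluster& c : clusters) {
        if (!sphereInFrustum(c, planes, 6)) continue;
        draws.push_back({ c.indexCount, 1, c.firstIndex, 0, c.instanceId });
    }
    return draws;
}
```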
I am glad that we are having this conversation. Also, mesh shaders are a perfect fit for a GPU-driven render pipeline. AFAIK Nanite is using mesh shaders (primitive shaders) on consoles at least, unless they nowadays use the SW raster for big triangles too. It's been a long time since I last analyzed Nanite (UE5 preview). Back then their PC version was using non-indexed geometry for big triangles, which is slow.
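For reference, meshlet-style data as mesh shaders typically consume it looks roughly like the sketch below; the 64-vertex/124-triangle limits are common vendor guidance rather than Nanite's exact values, and the indexed, vertex-sharing layout is what makes it cheaper than the non-indexed path mentioned above:

```cpp
#include <cstdint>
#include <vector>

// One meshlet = a small cluster with its own compact local index buffer,
// sized so a single mesh shader workgroup can cull and emit it independently.
struct Meshlet {
    uint32_t vertexOffset;   // into meshletVertices (indices into the real vertex buffer)
    uint32_t triangleOffset; // into meshletTriangles (3 bytes per triangle, local indices)
    uint32_t vertexCount;    // <= 64  (illustrative limit)
    uint32_t triangleCount;  // <= 124 (illustrative limit)
};

struct MeshletMesh {
    std::vector<Meshlet>  meshlets;
    std::vector<uint32_t> meshletVertices;  // de-duplicated vertex references
    std::vector<uint8_t>  meshletTriangles; // local 8-bit indices, reusing shared vertices
};

// Because vertices are indexed and shared within a meshlet, each vertex is
// transformed once per meshlet; a non-indexed path has to transform three
// unique vertices for every triangle instead.
```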
Nonsense video. There are a lot of half-understood things here, and all of the "better" alternatives and "missing features" have significant tradeoffs. In particular, the magic AI surface reduction / mesh generation concept proposed as a silver bullet is pretty silly -- we can do everything he suggests fairly easily with existing tech and without an AI solution. Generating new batched low-res geometry for the entire scene and baking maps to render with some kind of expensive parallax mapping is a well-established existing approach with its own severe perf tradeoffs (memory, difficulty of culling, the balance of draw calls vs. shader complexity, slow iteration time).