Nothing controversial here for me. IMHO, RT can paradoxically hinder the graphics leap of the XSX/PS5 generation: the hardware is simply still not strong enough to push next-gen assets, the amount of detail etc., plus RT on top. It is just enough for PS4-era graphics plus higher resolution and some RT, and that's just a waste of resources. Happily, UE5 takes a different approach.
I don't know how you calculated this)
Ray tracing has O(log n) complexity because level k of a perfect BVH4 tree (for example) has 4^k elements, so to trace 1,000,000 triangles you need to descend through at least 10 levels, while tracing 16,000 triangles requires visiting at least 7 levels; with a wider BVH the difference gets even smaller.
Of course that's not always the case in reality, since there is overlap that depends on the geometry's topology, but the difference between tracing 1,000,000 and 16,000 triangles should not be 5.5 times.
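To spell out that depth arithmetic, a minimal sketch (assuming a perfect tree with one triangle per leaf):
Code:
#include <cmath>
#include <cstdio>

int main()
{
    // Depth of a perfect BVH with branching factor B over n leaves: ceil(log_B(n)).
    auto depth = [](double n, double B) { return (int)std::ceil(std::log(n) / std::log(B)); };
    std::printf("BVH4 depth for 1,000,000 tris: %d\n", depth(1000000.0, 4.0)); // 10
    std::printf("BVH4 depth for 16,000 tris: %d\n", depth(16000.0, 4.0));      // 7
    return 0;
}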
Oops, I see the math I did in my head is wrong, as so often. Sorry!
So I wrote a little program to be sure. I hope there are no off-by-one bugs left, but it confirms your numbers:
Code:
int initialTriangles = 1024*1024; // full-resolution mesh: 4^10 triangles
int targetLOD = 128*128*2;        // triangle count we actually need (set to initialTriangles for full detail)
int subset = initialTriangles;
int branchingFactor = 4;          // BVH4
int nodes = 1;                    // total node count, starting with the root
int traversedLevels = -1;
int neededNodes = -1;
int level = 0;
SystemTools::Log("Tree branching factor: %i model triangles: %i needed triangles: %i\n",
    branchingFactor, initialTriangles, targetLOD);
for (;;)
{
    // number of leaves a cut through the tree at this level could expose
    int currentLOD = (int)pow(branchingFactor, level);
    SystemTools::Log("\nlevel: %i subset: %i nodes: %i currentLOD: %i",
        level, subset, nodes, currentLOD);
    level++;
    if (traversedLevels == -1 && currentLOD >= targetLOD)
    {
        // first level whose resolution reaches the target LOD
        traversedLevels = level;
        neededNodes = nodes;
        SystemTools::Log(" <-");
    }
    subset /= branchingFactor;
    if (subset == 0)
        break;
    nodes += (int)pow(branchingFactor, level); // add the next, wider level
}
SystemTools::Log("\n\ntraversed levels: %i performance factor: %f memory factor: %f\n\n",
    traversedLevels, float(traversedLevels) / float(level), float(neededNodes) / float(nodes));
For BVH4 I get this (assuming a point query and no overlaps):
Still doing 81% of the work for the example. I admit that's no big win. (The reason I didn't question it is that I had my GI in mind, where the same geometry is traced from many locations, so the advantage multiplies exponentially in practice for me.)
But memory is only 6%, so at least there my math was right. (For a binary tree I get 3%, which is 32 times less than the full tree.)
The branching factor does not really matter for the relative LOD advantage (aside from the quantization effect of rounding to an integer tree level); it only matters for the total number of tree levels we get. I tried 2, 4, 8, 64 - always similar results for the perf / memory advantages.
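In closed form (a sketch, again assuming a perfect tree with one triangle per leaf): cutting the tree at depth d = ceil(log_B(targetLOD)) needs sum_{i=0..d} B^i = (B^(d+1) - 1) / (B - 1) nodes, so the memory factor against the full tree of depth D = ceil(log_B(initialTriangles)) is about B^d / B^D ≈ targetLOD / initialTriangles, independent of B except for the ceil() rounding - which is exactly the quantization effect above (6% for B=4 vs. 3% for B=2 in this example).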
Here's what I get for the Nanite statue of 33M triangles:
Still 2/3 of the work, but the memory win is insane.
So we have to meet in the middle: I was wrong about the 'big' perf win, but LOD is still essential because of memory.
Thanks!
And that's exactly why I was saying here that it doesn't make sense to go crazy about LODs with RT (i.e. trying to maintain one-triangle-per-pixel levels with some overengineered system), and that classic discrete LODs will provide the same effect as Nanite as long as you keep the triangle density of those discrete LODs at subpixel levels.
Nothing controversial here for me. IMHO, RT can paradoxically hinder the graphics leap of the XSX/PS5 generation: the hardware is simply still not strong enough to push next-gen assets, the amount of detail etc., plus RT on top. It is just enough for PS4-era graphics plus higher resolution and some RT, and that's just a waste of resources. Happily, UE5 takes a different approach.
As a reminder: UE5's different approaches are not exactly tried and tested in production games, nor is it reaching for the sky in its performance targets. Just as much, I would say that almost all aspects of next-gen development, all approaches, still have a good maturation process ahead of them. Just look at the strides real-time RT has made in the last 2 years.
Thanks!
And that's exactly why I was saying here that it doesn't make sense to go crazy about LODs with RT (i.e. trying to maintain one-triangle-per-pixel levels with some overengineered system), and that classic discrete LODs will provide the same effect as Nanite as long as you keep the triangle density of those discrete LODs at subpixel levels.
The advantages of LOD are independent of the average geometry resolution we use in practice - we want them in any case, if our engine has support. UE5 just gives me an argument about the DXR limitations I have criticized since its introduction. It's one example, and we'll likely get more of these with time, not fewer.
If the engine has an advanced LOD system (UE5 being the only example I know of yet), it is not acceptable to give up that advantage because of RT API restrictions. Using constant-LOD proxies brings back all the problems we have just solved. Notice that the cost of building a BVH is linear in its memory cost, so at least there the performance win of having LOD is 'big'.
So my requests are in no way crazy, while your defense of a broken status quo is, IMO. Assuming our goal is progress and not stagnation.
Thankfully there are now mesh shaders, which every dev I have talked to has been very happy with.
As a reminder: UE5's different approaches are not exactly tried and tested in production games, nor is it reaching for the sky in its performance targets. Just as much, I would say that almost all aspects of next-gen development, all approaches, still have a good maturation process ahead of them. Just look at the strides real-time RT has made in the last 2 years.
Yes, but mesh shaders don't help with triangles smaller than about 4 pixels, because they still feed the hardware rasterizer - until we get a hardware rasterizer tuned for different triangle sizes and friendly to micropolygons.
I am sure Nvidia and AMD are preparing something for this; at least it lends some interest to this AMD patent.
Still doing 81% of the work for the example. I admit that's no big win. (...) For the Nanite statue of 33M triangles: still 2/3 of the work, but the memory win is insane. So we have to meet in the middle: I was wrong about the 'big' perf win, but LOD is still essential because of memory.
The 18.2% and 33.3% gains (the complements of those 81% and 2/3 work figures) are not enormous, but this is real time and everything helps. And memory is not free: the memory gain is just crazy - only 0.2% of the memory usage for 33 million polygons is insane.
@OlegSH I am sure artists are happy not to have to author discrete LODs. Nanite helps productivity too.
@JoeJ Now that Epic's rendering team complains about the DXR API, I think the API will evolve and help your project.
The Xbox Series API* is more flexible for RT than DXR, but I don't know if they implement anything useful for LOD on Xbox Series or PS5.
*MS said they can use offline BVHs on Xbox Series, meaning they can use offline-generated BVHs for static geometry and only build BVHs at runtime for dynamic geometry.
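A rough sketch of what that split could look like engine-side (all names here are hypothetical, not the actual Xbox Series API):
Code:
#include <string>
#include <vector>

// Hypothetical types and functions, for illustration only.
struct Mesh {};
struct BLAS {}; // bottom-level acceleration structure

BLAS* LoadPrebuiltBLAS(const std::string& path); // baked offline, shipped with the asset
BLAS* BuildBLASAtRuntime(const Mesh& mesh);      // rebuilt at runtime as the mesh deforms

struct Object
{
    bool isStatic = true;
    std::string bvhPath;
    Mesh mesh;
    BLAS* blas = nullptr;
};

void PrepareSceneBVH(std::vector<Object>& objects)
{
    for (Object& obj : objects)
    {
        obj.blas = obj.isStatic
            ? LoadPrebuiltBLAS(obj.bvhPath) // static: zero runtime build cost
            : BuildBLASAtRuntime(obj.mesh); // dynamic: only this pays per frame
    }
    // The top-level structure over the instances is still rebuilt or refitted every frame.
}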
I disagree regarding geometry. In most games the player doesn't really pay attention to geometry: racing games, first person shooters, third person shooters, real time strategies, etc. The player simply won't stop to appreciate all those details, as they all fade into the background during frantic motion.
What Nanite is doing is no different from tessellating the ground, the tree bark, the terrain, etc.; its advantage is mainly the continuous LOD, that's it.
We need complex geometry where it matters: on characters, cars, water, fluids, clothes, hair, volumetrics, smoke, sand, particles, etc., not on environmental props. Heck, most distant objects in racing games are cardboard cutouts, and no one cares.
So IMO, this fades in comparison to the need for a fully dynamic GI system in every game, one that takes care of both lighting and shadows simultaneously to deliver lifelike visuals.
Yeah, but it's not really an API issue there, because there is just one GPU architecture. There is also no motivation to support PC and a future console generation as well, like there is on Xbox.
So I guess they can do anything, from figuring out AMD's BVH specs and generating the data themselves, up to replacing AMD's traversal code.
That's why I always expected Sony exclusives to show the most interesting uses of RT.
What Nanite is doing is no different from tessellating the ground, the tree bark, the terrain, etc.; its advantage is mainly the continuous LOD, that's it.
There is a big difference for production: to make displacement mapping work on an arbitrary model, we need a seamless UV parametrization first, because seams in UV space would otherwise cause cracks in the geometry.
Such a parametrization is very difficult to solve. Only recently have I seen DCC tools beginning to support it. And even if we have that, we need to make the texture seamless as well, which also isn't trivial to automate.
I think that's the reason displacement mapping never became widely used except for heightmaps.
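To illustrate the crack problem, a minimal sketch (hypothetical types and helpers, not any particular engine's API): vertices duplicated along a UV seam share a position but not a UV, so they can sample different heights and get displaced apart.
Code:
#include <vector>

// Hypothetical minimal types, for illustration only.
struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position; Vec3 normal; Vec2 uv; };

// Bilinear height fetch (assumed to exist elsewhere).
float sampleHeight(const std::vector<float>& heightMap, int w, int h, Vec2 uv);

// Two coincident seam vertices carry different UVs, so sampleHeight() can
// return different values for them; their displaced copies then separate
// and open a visible crack in the surface.
void displace(std::vector<Vertex>& verts, const std::vector<float>& heightMap,
              int w, int h, float scale)
{
    for (Vertex& v : verts)
    {
        const float d = sampleHeight(heightMap, w, h, v.uv) * scale;
        v.position.x += v.normal.x * d;
        v.position.y += v.normal.y * d;
        v.position.z += v.normal.z * d;
    }
}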
On the importance of lighting vs. geometry: surely the more important one to improve is the one whose issues are more obvious in comparison to the other.
But personally I think terrain is the geometry with the worst detail in games. Visible straight edges on rocks always annoyed me, while characters or cars seemed fine.
Terrain seems the logical first choice to show off detail, because it has interesting detail at all frequencies. And of course it's an obvious photogrammetry target.
I disagree regarding geometry. In most games the player doesn't really pay attention to geometry: racing games, first person shooters, third person shooters, real time strategies, etc. The player simply won't stop to appreciate all those details, as they all fade into the background during frantic motion.
What Nanite is doing is no different from tessellating the ground, the tree bark, the terrain, etc.; its advantage is mainly the continuous LOD, that's it.
We need complex geometry where it matters: on characters, cars, water, fluids, clothes, hair, volumetrics, smoke, sand, particles, etc., not on environmental props. Heck, most distant objects in racing games are cardboard cutouts, and no one cares.
So IMO, this fades in comparison to the need for a fully dynamic GI system in every game, one that takes care of both lighting and shadows simultaneously to deliver lifelike visuals.
I don't really agree with this. Maybe it's just a subjective preference or having a 'different eye' for how we judge visuals, but I tend to place a lot of importance on the quality of environments in games. And I certainly do stop and notice details in most games, and obviously not every title has you flinging the camera around at breakneck speed for the entire experience.
And regardless of whether you are actively, consciously noticing details, your brain can still interpret 'this looks good' in content that is well done, and you'll usually do this *immediately*, without having to actually process what you're seeing. Most gamers don't have the first clue about the technical side of graphics, but they will absolutely still be able to notice that Uncharted 4 looks better than Uncharted 2, even if they couldn't begin to explain why.
Some other posts here make good points - sometimes you think things are great until you experience better, and then your standards change. This is usually how it works, even if it takes a little while for those new standards to really cement. I'd say the UE5 demos are pretty illuminating here. Even if you personally don't see the significance yet, I'd bet that if you were to play around in games with that level of geometric refinement for a few months, going back to some wobbly tessellated terrain meshes would be far more noticeable and far less acceptable.
But I would agree that we need improvements in the areas you mentioned as well. Any significant advancement in graphics will be held back if the other aspects of the visuals don't move forward with it, at least to some degree. Hence the complaints that while Metro Exodus EE might have amazing lighting, it still largely looks like a PS4-era game at the end of the day - because it is. No one thing is going to make something 'next gen'. Even with Nanite, if you had the most detailed terrain ever seen in a game but stuck a bunch of Witcher 3 trees in the scene, it would look jarring. But if you could have both amazing-looking trees and amazing-looking terrain, then you're getting somewhere. Just saying, I don't understand the argument that this stuff doesn't matter that much. It all matters.
I also don't agree that gamers don't appreciate geometry. The most recent HFW gameplay demo, and most everyone's reactions to the high-fidelity geometry sprinkled all around, should be telling: when the geometry is there, gamers tend to notice. It's really a case of being faced with that much geometry for the first time in a console game, and HFW can spoil all our expectations going forward in that regard.
I disagree regarding geometry. In most games the player doesn't really pay attention to geometry: racing games, first person shooters, third person shooters, real time strategies, etc. The player simply won't stop to appreciate all those details, as they all fade into the background during frantic motion.
What Nanite is doing is no different from tessellating the ground, the tree bark, the terrain, etc.; its advantage is mainly the continuous LOD, that's it.
We need complex geometry where it matters: on characters, cars, water, fluids, clothes, hair, volumetrics, smoke, sand, particles, etc., not on environmental props. Heck, most distant objects in racing games are cardboard cutouts, and no one cares.
So IMO, this fades in comparison to the need for a fully dynamic GI system in every game, one that takes care of both lighting and shadows simultaneously to deliver lifelike visuals.
This is not lifelike, because geometry casts shadows, and with normal maps those details cast no shadows. There is a reason offline rendering never used normal maps.
IMO, even before the silhouette problem, this is the main advantage of geometry: it visually gives 'volume' by casting shadows. With normal maps there is the impression that something is missing and everything is flat, because the shadows of the details are missing.
For photorealism you need the full package: geometry, path tracing and PBR, and that is the reason offline rendering uses these three elements.
I like this image because it reminds me of the UE5 demo. Everything is there: geometry, path tracing and PBR.
EDIT: Unreal Engine 5 is probably the best remaster/remake engine. Take the original ZBrush assets of a game to create a remaster.
But personally I think terrain is the geometry with the worst detail in games. Visible straight edges on rocks always annoyed me, while characters or cars seemed fine.
There is a reason Epic chooses rocky mountains and terrain to showcase Nanite every time: they are literally the best showcase for the tech. But game worlds are much more varied than this - cityscapes, forests, sandy beaches, grassy hills, mundane streets, etc. In all of these the benefits of Nanite will be less obvious.
My argument is that we prioritize the areas I mentioned first: crank up geometry to 11 on characters, cars, heroes, and main objects as much as you want; these are things that occupy the screen 100% of the time and are always noticeable.
But please don't go increasing environmental geometry detail while leaving the main objects the same, or while degrading the quality of the lighting with baked static GI.
This is not lifelike, because geometry casts shadows, and with normal maps those details cast no shadows. There is a reason offline rendering never used normal maps.
Shadow maps suck at casting shadows even for the lackluster geometry we have now; that's why ray traced shadows get more shadows out of that same geometry without increasing the polygon count at all.
See games like Shadow of the Tomb Raider and Call of Duty Cold War for examples.
I disagree regarding geometry. In most games the player doesn't really pay attention to geometry: racing games, first person shooters, third person shooters, real time strategies, etc. The player simply won't stop to appreciate all those details, as they all fade into the background during frantic motion.
No way, dude. Sure, players don't say "wow, that's a lot of geometry!" unless it's fed to them by marketing, but they definitely notice more complex scenes: more overlapping detail, more sharp shadows on more elements, more complex silhouettes, smoother surfaces (which means not just 'I don't see the jagged edge', but also smoother lighting that doesn't fall apart when a normal map mips, more correct shadows, etc.).
There is a reason Epic chooses rocky mountains and terrain to showcase Nanite every time: they are literally the best showcase for the tech. But game worlds are much more varied than this - cityscapes, forests, sandy beaches, grassy hills, mundane streets, etc. In all of these the benefits of Nanite will be less obvious.
Also no way. Epic chose rocky mountains because they're cheap to produce and show off Megascans. There's more than one business interest at play here. Wait for real games to use it; it'll be a big deal. I think people just don't realize how serious the compromises artists make are.
Shadow maps suck at casting shadows even for the lackluster geometry we have now; that's why ray traced shadows get more shadows out of that same geometry without increasing the polygon count at all.
See games like Shadow of the Tomb Raider and Call of Duty Cold War for examples.
You won't get me arguing that RT shadows aren't the best way forward, but the virtual shadow maps in UE5 are more than good enough to do the job of revealing detail accurately.