Where does BVH construction happen, CPU or GPU? I thought it was CPU?
> Can you define "continuous"? DXR with RTX allows updating pointers to the correct mesh LOD during top-level rebuilds, which most games do every frame anyway. Nvidia has even implemented stochastic LOD, but it is limited to 8 levels of transitions between LODs.

Afaik, 'continuous LOD' should mean a smooth LOD transition, e.g. like we have with tessellation shaders, where tessellation is controlled by a floating-point number and results change gradually.
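The per-frame pointer update described above could look roughly like the sketch below: each mesh keeps one BLAS per discrete LOD, and the top-level rebuild just writes the handle for the level chosen by camera distance. All names here (`BlasHandle`, `pick_lod`, the doubling-distance rule) are illustrative assumptions, not real DXR API.

```cpp
#include <cstdint>
#include <cmath>
#include <vector>

// Stand-in for a BLAS GPU virtual address (illustrative, not D3D12 types).
using BlasHandle = std::uint64_t;

struct MeshLods {
    std::vector<BlasHandle> blas_per_lod;  // index 0 = highest detail
    float lod0_distance;                   // distance up to which LOD 0 is used
};

// Assumed policy: each doubling of distance drops one LOD level,
// clamped to the coarsest level available.
std::size_t pick_lod(const MeshLods& m, float distance) {
    if (distance <= m.lod0_distance) return 0;
    std::size_t level = static_cast<std::size_t>(
        std::log2(distance / m.lod0_distance)) + 1;
    std::size_t coarsest = m.blas_per_lod.size() - 1;
    return level < coarsest ? level : coarsest;
}

// During the per-frame top-level rebuild, the instance simply points
// at whichever BLAS the chosen LOD owns.
BlasHandle blas_for_frame(const MeshLods& m, float distance) {
    return m.blas_per_lod[pick_lod(m, distance)];
}
```

Because only a pointer changes, the transition is still a discrete pop between levels — which is exactly the contrast with 'continuous' LOD being drawn here.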
> AMD is allegedly also working on dedicated HW acceleration for ML and RT for future GPUs, so we should expect performance in line with Intel and NV.

Huh? AMD has both already. If they wanted their matrix cores in consumer GPUs, they would be there.
LOD was unsolved for so many decades. Seems we just have to accept one more, because the guys which brought us HW RT did not really spend a lot of thought on it. Making trivial offline legacy methods realtime was their only idea. But at least we have $2000, 2 kg GPUs now. How great. : (
Seems they spent a lot of thought on HW ray tracing though. Now your consoles are behind.
> I wonder if the stacking problems of building and maintaining a BVH structure that does everything developers would want it to do have made anyone reconsider looking into point splatting.

Yeah, it's for this reason that I still consider it. Maybe clusters of points would be easier to get the texture shading advantage as well. We can cache lighting simply on those points. And I found it can do analytical AA at negligible cost, so no need for TAA in screenspace. Maybe we could even turn certain clusters into impostors, to make upscaling and frame-interpolation crutches obsolete.
If you offered me a choice between a 4090 and a PS5, that would be like choosing between cancer and a wheelchair. But I'd take a dev kit.
> LOD was unsolved for so many decades. Seems we just have to accept one more

I hate LOD pop-in. Surprisingly, one of my favourite UE5 features.
While the challenges remaining to employing point splatting universally are still there, the advantages of being relatively simple, programmable, amenable to tracing, and lightning fast all recommend themselves as a different solution to banging heads against BVHs, IHVs, etc. and trying for breakthroughs. Here, for one example, is a nigh-photoreal recreation of part of the T-Rex scene from Jurassic Park, running with traced shadows and ambient occlusion, all runnable (I believe; I guess this is on PS5, but the level might work on a PS4 too) on quite low-end hardware:
> I hate LOD pop-in.

If you're sensitive to this, I'd like to know on which kinds of objects you notice it, besides foliage.
> You do have your preferences, right?

I have good reasons for them.
You've overstated the accomplishment here. The results are great, but they are far from photoreal, and in particular they don't scale well in quality, such as no skin meshing and geometry being largely static. Watch the vid at 6:45 where the T-Rex takes on a Toy-Story feel. Splatting also isn't the answer, as the results in Dreams are achieved by using SDFs instead of dense triangle geometry, the same representation used in Lumen's 'fast' (software) path. These SDFs are still rendered as triangles as well as splats, because the original splatting-only strategy didn't work completely.
Regarding working on PS4: AFAIK there is no PS5-enhanced Dreams and all Dreams levels run on all hardware, so I'd be surprised if this doesn't run on PS4. It's a fabulous creation, but it's not one in the eye for traditional HWRT. We explored a lot of these options in the debate before HWRT was introduced, with some promising compromises, but they haven't been developed past the inherent problems of accuracy in spatial representation of the objects.
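For readers unfamiliar with the SDF representation mentioned above, a minimal sphere-tracing sketch may help: rays march through a signed distance field by the distance bound until they hit a surface, which is how distance-field scenes are typically traced for shadows and AO. The scene here (a single analytic sphere) and all constants are illustrative; this is not Dreams' or Lumen's actual code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Signed distance to a unit sphere at the origin (toy stand-in for a scene SDF).
float scene_sdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March the ray by the distance bound until we hit (d < eps) or escape.
// Returns true on a hit; 'hit_t' receives the ray parameter at the surface.
bool sphere_trace(Vec3 origin, Vec3 dir, float max_t, float* hit_t) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < max_t; ++i) {
        float d = scene_sdf(add(origin, mul(dir, t)));
        if (d < 1e-4f) { *hit_t = t; return true; }
        t += d;  // safe step: nothing in the field is closer than d
    }
    return false;
}
```

The appeal for shadows and AO is that the same march gives conservative visibility without any triangle BVH at all, at the cost of the accuracy issues in spatial representation noted above.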
> One even includes point-based representation of distant animated characters

I remember Quake 1 did just that. : )
Why do you think point samples would not require something like a full-scene BVH? If you want RT features, you need to organize them somehow, to have a way to query them spatially.
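To make the point concrete: even the simplest spatial query over point samples needs some acceleration structure first. A uniform hash grid stands in here for the BVH; the cell size, key packing, and the `r <= cell` assumption in the query are all illustrative choices, not a proposal for a production structure.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Pt { float x, y, z; };

struct PointGrid {
    float cell;
    std::unordered_map<std::uint64_t, std::vector<Pt>> cells;

    // Pack three cell coordinates into one hash key (collisions only cost
    // speed, not correctness, because the query re-checks true distance).
    static std::uint64_t key(int ix, int iy, int iz) {
        return (std::uint64_t(std::uint32_t(ix)) << 42) ^
               (std::uint64_t(std::uint32_t(iy)) << 21) ^
                std::uint64_t(std::uint32_t(iz));
    }

    void insert(Pt p) {
        cells[key(int(std::floor(p.x / cell)),
                  int(std::floor(p.y / cell)),
                  int(std::floor(p.z / cell)))].push_back(p);
    }

    // Gather all points within 'r' of q: visit the 27 neighbouring cells
    // (assumes r <= cell), then filter by actual distance.
    std::vector<Pt> radius_query(Pt q, float r) const {
        std::vector<Pt> out;
        int cx = int(std::floor(q.x / cell));
        int cy = int(std::floor(q.y / cell));
        int cz = int(std::floor(q.z / cell));
        for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
        for (int dz = -1; dz <= 1; ++dz) {
            auto it = cells.find(key(cx + dx, cy + dy, cz + dz));
            if (it == cells.end()) continue;
            for (const Pt& p : it->second) {
                float ddx = p.x - q.x, ddy = p.y - q.y, ddz = p.z - q.z;
                if (ddx * ddx + ddy * ddy + ddz * ddz <= r * r)
                    out.push_back(p);
            }
        }
        return out;
    }
};
```

A grid handles dense, uniformly sized points well, but the moment you want LOD hierarchies or wildly varying scales you are back to wanting a tree — which is the thrust of the question above.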
There's nothing wrong with BVH. It's the most flexible structure: it supports refitting for animation, the tree hierarchy can represent LODs, and it works for large scenes too.
The problems we now see in HW RT come only from having no access to its most important data structure, ruling out the LOD advantage, and thus eventually also the support for large, arbitrary scenes.
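The refitting mentioned above is exactly the operation that needs access to the structure: after vertices animate, walk the existing BVH bottom-up and grow each node's box to cover its children, keeping the tree topology intact. The node layout here (children stored after their parent, so one reverse pass suffices) is an assumption for the sketch, not any vendor's internal format.

```cpp
#include <algorithm>
#include <vector>

struct AABB { float lo[3], hi[3]; };

// Smallest box enclosing both inputs.
AABB merge(const AABB& a, const AABB& b) {
    AABB r;
    for (int i = 0; i < 3; ++i) {
        r.lo[i] = std::min(a.lo[i], b.lo[i]);
        r.hi[i] = std::max(a.hi[i], b.hi[i]);
    }
    return r;
}

struct Node {
    int left = -1, right = -1;  // -1 marks a leaf
    int prim = -1;              // leaf: index into primitive boxes
    AABB box{};
};

// Assumes children are stored after their parent, so a single reverse
// pass refits the whole tree in O(n) without rebuilding topology.
void refit(std::vector<Node>& nodes, const std::vector<AABB>& prim_boxes) {
    for (int i = int(nodes.size()) - 1; i >= 0; --i) {
        Node& n = nodes[i];
        if (n.left < 0) n.box = prim_boxes[n.prim];
        else            n.box = merge(nodes[n.left].box, nodes[n.right].box);
    }
}
```

This is cheap precisely because the topology is reused; the complaint above is that when the API owns the structure, the application can't do tricks like this (or LOD-aware traversal) itself.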
> Sorry, "RT triangle BVH" problems with distance. You'd still need some sort of structure; a BVH without triangles might be fine, because suddenly the problems of point splatting not being able to represent high-res surfaces go away if you're representing something far in the distance, as it's representable at very low res and memory without any problems anyway.

Isn't that a BVH LOD? Have a high-detail one for close up and a distant one where a very low-res representation is adequate? Once you get there, whether you rasterise triangles or point splats doesn't make a difference, it seems to me. What I'm hearing is: use a coarse SDF representation and trace against that at range, and use your detailed trimesh BVH for up-close details. That concept sounds reasonable, like 'mipmaps' for spatial representation.