Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

They could have started with a more exposed computational approach.
The more exposed approach is already used in Lumen, with far more limitations regarding SDF building, supported geometry types, supported effects, etc.
It's very naive to assume that a more exposed computational approach would suddenly solve these problems.
 
There is firmware running on the RT cores that operates at a far lower level than what DXR exposes; expose that language and the problems will get solved.

Just like compute, with Nanite, could solve rasterization problems better than all the fixed-function crap developers are deemed capable of handling and just had to start plainly ignoring.
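For context on the level of abstraction being argued about: in DXR the application hands the runtime its triangles and gets back an opaque acceleration structure; everything below that call (node layout, build strategy, any traversal firmware) is the driver's and hardware's business. A rough C++ sketch of that public surface follows; device/resource creation, barriers and proper scratch sizing are omitted, and the BuildBlas helper name is just mine for illustration:

```cpp
// Sketch only: build one bottom-level acceleration structure over a triangle
// mesh. The application never sees the resulting BVH layout.
#include <d3d12.h>

void BuildBlas(ID3D12Device5* device, ID3D12GraphicsCommandList4* cmdList,
               D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount,
               D3D12_GPU_VIRTUAL_ADDRESS indexBuffer, UINT indexCount,
               D3D12_GPU_VIRTUAL_ADDRESS scratch, D3D12_GPU_VIRTUAL_ADDRESS result)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress = vertexBuffer;
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount = vertexCount;
    geom.Triangles.IndexBuffer = indexBuffer;
    geom.Triangles.IndexFormat = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount = indexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    // ALLOW_UPDATE lets the driver refit this BVH later instead of rebuilding.
    inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE;
    inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs = 1;
    inputs.pGeometryDescs = &geom;

    // Real code would size the scratch/result buffers from this query
    // (prebuild.ScratchDataSizeInBytes / ResultDataMaxSizeInBytes).
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs = inputs;
    build.ScratchAccelerationStructureData = scratch;
    build.DestAccelerationStructureData = result;

    // The single opaque call: build strategy and node format are entirely
    // up to the driver and hardware.
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```

Whether a lower-level interface to whatever sits under that call could or should be exposed is exactly what is being debated above.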
 
expose that language and the problems will get solved
How, when the problems with Nanite aren't even related to the RT cores?

Just like compute, with Nanite, could solve rasterization problems better than all the fixed-function crap developers are deemed capable of handling and just had to start plainly ignoring.
I've been following REYES-style software renderers on GPUs for a long time. IIRC the first microtriangle software rasterizers started to appear 10 years ago, in the Fermi time frame, and were already competitive with or faster than hardware rasterization; it took just 10 years and a few orders of magnitude more compute performance to bring that into an engine with an appropriate LOD system, GPU-driven rendering, etc.
Let's see whether other devs will move to the same systems and overhaul their engines and content-authoring pipelines, which is already highly doubtful and way more work compared with something like adding a BVH.
So it's not like exposed compute magically solves it all. Maybe in 10 years it will be fast enough to do millions of refits, rebuilds and BVH patches on the fly without dedicated hardware, or maybe it won't, due to power/silicon scaling and other limitations.
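For anyone who hasn't followed the microtriangle software-rasterization work: the trick usually described (and reportedly at the heart of Nanite's compute rasterizer) is to pack depth into the high bits of a 64-bit word and a triangle/cluster ID into the low bits, so a single atomic max does the depth test and the visibility-buffer write at once. Below is a minimal CPU-side C++ sketch of that idea, assuming a reversed-Z convention where a larger depth value means closer; it is an illustration, not Epic's code:

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <vector>

struct VisBuffer {
    int width, height;
    std::vector<std::atomic<uint64_t>> pixels;

    VisBuffer(int w, int h) : width(w), height(h), pixels(w * h) {
        for (auto& p : pixels) p.store(0);
    }

    // Pack quantized depth above a 32-bit payload so that a numerically
    // larger value always means "closer" under reversed-Z.
    static uint64_t pack(float depth, uint32_t triangleId) {
        uint32_t d = (uint32_t)(depth * 4294967295.0);
        return ((uint64_t)d << 32) | triangleId;
    }

    // One "pixel" of the software rasterizer: depth test and visibility
    // write collapse into a single atomic max (emulated here with a CAS loop).
    void write(int x, int y, float depth, uint32_t triangleId) {
        uint64_t candidate = pack(depth, triangleId);
        std::atomic<uint64_t>& px = pixels[y * width + x];
        uint64_t current = px.load(std::memory_order_relaxed);
        while (candidate > current &&
               !px.compare_exchange_weak(current, candidate)) {
            // another thread wrote a closer sample; re-check and retry
        }
    }
};

int main() {
    VisBuffer vb(4, 4);
    vb.write(1, 1, 0.25f, 7);   // far triangle 7
    vb.write(1, 1, 0.75f, 42);  // nearer triangle 42 wins under reversed-Z
    uint64_t v = vb.pixels[1 * 4 + 1].load();
    std::printf("visible triangle at (1,1): %u\n", (uint32_t)(v & 0xffffffffu));
    return 0;
}
```

On a GPU the same thing is a single 64-bit InterlockedMax per covered pixel, which is part of what makes a compute rasterizer competitive once triangles shrink to roughly pixel size.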
 
Refits/rebuilds are an artifact of the specific way the AABB/triangle intersection-test fixed-function blocks are used. But the RT cores are almost certainly flexible enough to be usable in other ways.
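To make the refit/rebuild distinction concrete: a refit keeps the tree topology and only recomputes bounding boxes bottom-up after vertices move, whereas a rebuild reorganizes the tree itself. A minimal C++ sketch of a refit over a simple binary BVH; the node layout here is an assumption for illustration, not any driver's actual format:

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

struct AABB {
    float mn[3] = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    float mx[3] = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    void grow(const AABB& b) {
        for (int i = 0; i < 3; ++i) {
            mn[i] = std::min(mn[i], b.mn[i]);
            mx[i] = std::max(mx[i], b.mx[i]);
        }
    }
};

struct Node {
    AABB bounds;
    int left = -1, right = -1;        // child node indices; -1 means leaf
    int firstPrim = 0, primCount = 0; // primitive range for leaves
};

// Refit: recompute bounds bottom-up, tree structure untouched.
// primBounds holds the current AABB of each primitive after animation.
void Refit(std::vector<Node>& nodes, int nodeIndex,
           const std::vector<AABB>& primBounds,
           const std::vector<int>& primIndices)
{
    Node& n = nodes[nodeIndex];
    AABB b;
    if (n.left < 0) {                 // leaf: merge its primitives' boxes
        for (int i = 0; i < n.primCount; ++i)
            b.grow(primBounds[primIndices[n.firstPrim + i]]);
    } else {                          // interior: children first, then merge
        Refit(nodes, n.left, primBounds, primIndices);
        Refit(nodes, n.right, primBounds, primIndices);
        b.grow(nodes[n.left].bounds);
        b.grow(nodes[n.right].bounds);
    }
    n.bounds = b;
}

int main() {
    // Tiny example: a root with two single-primitive leaves.
    std::vector<AABB> primBounds = {
        { {0, 0, 0}, {1, 1, 1} },     // primitive 0 after animation
        { {5, 0, 0}, {6, 1, 1} },     // primitive 1 after animation
    };
    std::vector<int> primIndices = { 0, 1 };

    std::vector<Node> nodes(3);
    nodes[0].left = 1; nodes[0].right = 2;          // root
    nodes[1].firstPrim = 0; nodes[1].primCount = 1; // leaf A
    nodes[2].firstPrim = 1; nodes[2].primCount = 1; // leaf B

    Refit(nodes, 0, primBounds, primIndices);
    return 0;
}
```

Refits are cheap but tree quality degrades as geometry drifts far from where the tree was originally built, which is why engines end up mixing refits, partial rebuilds and full rebuilds; how much of that churn is inherent versus an artifact of the current fixed-function interface is the point being made above.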

Consider how much less useful tensor cores would be if they were exposed at the same level of abstraction DXR exposes the RT cores.
 
Now we really need to see some UE5 demos and games so we can figure out which architectures are going to crap out :D
Luckily for Nanite it is largely a theoretical problem at the moment. The current approach happens to work (and seemingly fairly robustly) on current hardware and drivers, but no one, us included, is entirely happy with it from a spec POV going forward. That's what I mean about the IHVs getting a bit stuck here though... if it works fine today and they release a new architecture/driver that breaks it all, I don't think gamers are going to be very sympathetic to an appeal to esoteric technical spec details.

Can we be a little more specific regarding which companies/practices/tech features are holding the industry back, and which aren't?
I don't want to get too specific, but broadly I don't want to imply there's one bad apple here or something. From issue to issue and from time to time, everyone plays the various cards they have in these conversations. You can probably guess in broad strokes which companies more actively take the strategy of following benchmarks and which try to push a bit of the bleeding edge, but there's a lot of grey area.

Like I said though, I'm happy to call out the fixation on certain benchmarks as something that actively harms industry progress here. I don't have an easy solution, but tech sites and reviewers benchmarking real games has helped on desktop. Mobile is still stuck in gfxbench mode, unfortunately, last I checked. And of course getting overly fixated on any specific game is not particularly useful, nor is benchmarking and declaring winners for games that are already well north of, for instance, 300 fps on all targets.
 

It's not going to change overnight either, since we're practically stuck on today's hardware for at least another 6 or 7 years.
 
You seriously think the cards have enough oomph for anything else (without dialing back the calendar some 10+ years on graphics quality)? o_O

What I think is that the cards have enough oomph to produce better results than we are seeing today from games where RT is an afterthought. You disagree?
 
I was going through my favourite videos on YouTube and found one from Euclideon. Remember those guys?

The ones who claimed to have unlimited detail using point cloud data, even down to a grain of dirt?

Unlimited detail meet Nanite, Nanite meet Unlimited detail :LOL:
 
You can probably guess in broad strokes which companies more actively take the strategy of following benchmarks and which try to push a bit of the bleeding edge, but there's a lot of grey area.
Yeah, looking at the past 3 to 4 years specifically, I can guess that already.

Like I said though, I'm happy to call out the fixation on certain benchmarks as something that actively harms industry progress here.

This console generation seems particularly fixated on certain metrics that belonged to the old era, aside from I/O of course.
 