Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

It sure was. NV/AMD (and even others like Matrox) had tech demos that blew minds. Somehow those things aren't a thing anymore today. Even console manufacturers had their tech demos; the PS2 had "Find My Own Way", for example, and many more.

Changes in how developer relations are handled might be a factor here. There's less need for those showcases if you already have relationships with a few developers who are ready to take advantage of and showcase the hardware. AMD and Nvidia both have actual games ready to showcase each new generation shortly after launch, and even as part of the launch presentations.

There are also the general diminishing returns in the actual "wow" factor of each new generation compared to way back then. Ray tracing is an example of this. From a technological standpoint it's arguably the biggest shift since the move to programmable shaders, well over a decade ago now, yet the general "wow" factor of the demonstrations is rather mixed. Aside from that, there's even less of a differentiator against the previous generation's showcase. For instance, the 6900 XT might be much faster than the 5700 XT, but other than higher resolutions/frame rates there isn't really anything in terms of actual fidelity improvements you can showcase against the 5700 XT, outside of, well, the ray tracing.

Didn't Nvidia release Marbles?

Nvidia has still been releasing tech demos. There was the Reflections demo (stormtroopers) for Turing to showcase ray tracing, and Apollo for Maxwell to showcase VXGI.
 
Nvidia has still been releasing tech demos. There was the Reflections demo (stormtroopers) for Turing to showcase ray tracing, and Apollo for Maxwell to showcase VXGI.
Yeah, I was going to say there were ray tracing demos from both Nvidia and AMD, and Intel had a recent graphics demo showcasing some Xe stuff. I do miss the Ruby demos ATI used to make, though. They had tiny stories in action-heavy scenes and I thought they were much more interesting than "look at this lizard walk on this branch" type demos.
 
One of the problems with RT is that devs got really good at SSR, cube maps, etc. -- faking the lighting well enough that RT doesn't wow as much as it could have. Let's face it: FH5 looks amazing with the lighting fakery they used. RT is awesome, but it's not going to be night and day for most people, including me. I know there are people here who get very excited about RT because they can really tell the difference, but I believe most people can barely see it. That's why they keep using it mainly for reflections in Spider-Man MM and the like, as that's what's going to be noticed the most by your average gamer.
 
One of the problems with RT is that devs got really good at SSR, cube maps, etc. -- faking the lighting well enough that RT doesn't wow as much as it could have. Let's face it: FH5 looks amazing with the lighting fakery they used. RT is awesome, but it's not going to be night and day for most people, including me. I know there are people here who get very excited about RT because they can really tell the difference, but I believe most people can barely see it. That's why they keep using it mainly for reflections in Spider-Man MM and the like, as that's what's going to be noticed the most by your average gamer.

The main problem with RT is that games are currently designed with the limitations of rasterization in mind. Basically, you don't know what you're missing. I think it's really easy for most people to see the difference between an offline path-traced render (the future) and current game graphics.
 
If performance does end up being competitive in Nanite, I anticipate that other hardware vendors will naturally stick to their own schedules rather than changing them, which could mean that these discussions get stalled again.
Right and indeed if that's the model, you will always be 2-3 years too late. To be clear, I have no problem with people just doing the "wait and see before I build hardware" approach. As a business strategy this is totally fine; Intel did this for quite some time very intentionally and mobile vendors do it exclusively (can make arguments either way on Apple). But by taking this approach you don't really get a vote on new stuff because your vote is just always going to be "I don't want anything to change because that's what keeps me the most competitive with the folks actually pushing new hardware/features".

Almost every new feature has to be built before there are benchmarks in place for obvious reasons. Ray tracing, bindless, conservative raster, ROVs, etc. all have to be motivated based on some amount of knowledge or deliberate intention to build future rendering techniques. This was basically my job when I was at Intel - understand what is likely to be coming down the line and align the hardware/software as much as possible in advance. There's obviously a push/pull from both sides and there can never be a 100% hit rate (see geometry shaders and arguably tessellation).

If a company is content to just wait and see on all this stuff before they build it, so be it, but then we're not really even in the same discussion. You can certainly interpret that as "ISVs don't care about mobile", but the reality is if you want to be taken seriously with new technology you have to be willing to be at least a bit proactive. Reactive is fine, but then you're stuck without a vote. See nobody caring about whatever Metal does 5+ years too late on tessellation, compute, shading languages, etc.
 
The main problem with RT is that games are currently designed with the limitations of rasterization in mind. Basically, you don't know what you're missing. I think it's really easy for most people to see the difference between an offline path-traced render (the future) and current game graphics.
You seriously think the cards have enough oomph for anything else (without dialing back the calendar some 10+ years on graphics quality)? o_O
 
Right and indeed if that's the model, you will always be 2-3 years too late. To be clear, I have no problem with people just doing the "wait and see before I build hardware" approach. As a business strategy this is totally fine; Intel did this for quite some time very intentionally and mobile vendors do it exclusively (can make arguments either way on Apple). But by taking this approach you don't really get a vote on new stuff because your vote is just always going to be "I don't want anything to change because that's what keeps me the most competitive with the folks actually pushing new hardware/features".

Almost every new feature has to be built before there are benchmarks in place for obvious reasons. Ray tracing, bindless, conservative raster, ROVs, etc. all have to be motivated based on some amount of knowledge or deliberate intention to build future rendering techniques. This was basically my job when I was at Intel - understand what is likely to be coming down the line and align the hardware/software as much as possible in advance. There's obviously a push/pull from both sides and there can never be a 100% hit rate (see geometry shaders and arguably tessellation).

Predicting the future is a perilous move, especially since examples you mentioned like geometry shaders, tessellation, or ROVs turned out to be duds. Focusing on low-hanging fruit like more explicit bindless functionality, host-visible device memory heaps, or extending ray tracing in the near future is a relatively benign subject. If you're asking for highly contentious features like forward progress guarantees, then you can either accept giving up on the idea or die on the hill of relying on the only implementation, from a single vendor. There's a fine line between being bold and being absurd when you're projecting the future.

If a company is content to just wait and see on all this stuff before they build it, so be it, but then we're not really even in the same discussion. You can certainly interpret that as "ISVs don't care about mobile", but the reality is if you want to be taken seriously with new technology you have to be willing to be at least a bit proactive. Reactive is fine, but then you're stuck without a vote. See nobody caring about whatever Metal does 5+ years too late on tessellation, compute, shading languages, etc.

You also need consent from more than two vendors if you want to depend on this new technology in a lot of cases, and we're all well aware that Apple doesn't care about seeking approval from the rest of the industry either. On the other hand, if you truly care about portability, then you will continually seek that approval from the industry.
 
Predicting the future is a perilous move, especially since examples you mentioned like geometry shaders, tessellation, or ROVs turned out to be duds.
I wouldn't call tessellation a dud -- sure, it's kinda bad, but it's been present in just about every game for a long time. A feature doesn't have to be a generation-defining game changer to be useful. And the stuff Andrew actually listed, like bindless and conservative rasterization, is huge, even if it doesn't come with glitzy tech demos the average gamer can understand at a glance. Regardless, I'm going to bet on the industry (including big players like Epic) continuing to make steady, good decisions and progress -- some things will be duds, some promising approaches will never get hardware support, others will get a usable but not ideal solution, but on average there's a good history of steady progress.
 
Watch Dogs, The Ascent and Cyberpunk all require sub-1440p to achieve 60 fps. Even at 1080p a 3090 can't maintain 60 in Watch Dogs and Cyberpunk, actually. What games do you think have visuals that approach
Cyberpunk is a complex open world game, with complex systems and great visuals. Its rasterization pass is already heavy as it is; the 3090 barely does 1440p60 in busy scenes, and 4K60 is completely out of reach even for the 3090, so it's natural that when the game adds 4 heavy RT effects it drops below 60 even on the 3090. The UE5 demo is just some static scenes with next to no animation, AI, or complex simulation.

Watch Dogs is the same: it's a game with a big, expensive world with AI and complex systems on top, and it's doing extensive RT reflections on everything (dozens of surfaces). The UE5 demo can't even do any of that yet, as Lumen can't do reflections and the demos were devoid of any; compared to the empty UE5 demo, the world of Watch Dogs is rendering much, much more stuff.

The Ascent is another game with AI and a lot of detail in big levels, with multiple RT effects and dozens of RT reflections in any scene. It's also a game from a small developer, so it's not well optimized.

If the UE5 demo is barely running 1080p60 on a 3090 with no AI, big worlds, many objects on screen, animation, etc., imagine the performance once all of that is added.
 
If the UE5 demo is barely running 1080p60 on a 3090 with no AI, big worlds, many objects on screen, animation, etc., imagine the performance once all of that is added.
The test scenes have comical amounts of objects on screen -- a regular shipped game would have *less* (at least of the bad, expensive case), not more. AI, animation, etc. have less to do with graphics performance than you seem to imagine. UE5 games will almost certainly be bound by Lumen (not Nanite) performance to that 1080p60 or whatever, and the rest of the game will have plenty of headroom.
 
Cyberpunk is a complex open world game, with complex systems and great visuals. Its rasterization pass is already heavy as it is; the 3090 barely does 1440p60 in busy scenes, and 4K60 is completely out of reach even for the 3090, so it's natural that when the game adds 4 heavy RT effects it drops below 60 even on the 3090. The UE5 demo is just some static scenes with next to no animation, AI, or complex simulation.

Watch Dogs is the same: it's a game with a big, expensive world with AI and complex systems on top, and it's doing extensive RT reflections on everything (dozens of surfaces). The UE5 demo can't even do any of that yet, as Lumen can't do reflections and the demos were devoid of any; compared to the empty UE5 demo, the world of Watch Dogs is rendering much, much more stuff.

The Ascent is another game with AI and a lot of detail in big levels, with multiple RT effects and dozens of RT reflections in any scene. It's also a game from a small developer, so it's not well optimized.

If the UE5 demo is barely running 1080p60 on a 3090 with no AI, big worlds, many objects on screen, animation, etc., imagine the performance once all of that is added.
UE5 performance looks much more justified when looking at the output than in the RTX games, IMO. I expect that would be the common opinion amongst most gamers. I personally think all 3 of those titles offer a poor visuals-to-performance ratio.
 
Focusing on low-hanging fruit like more explicit bindless functionality, host-visible device memory heaps, or extending ray tracing in the near future is a relatively benign subject.
I don't know if you were involved in the early discussions, but let me assure you that bindless and especially ray tracing all started life as *highly* contentious features, as you might imagine. Much more so than anything related to the compute execution model, to be honest.

As I stated I have no problem with companies (and specs...) that just want to play follow the leader and refine a little bit around the edges. That is valuable in its own right. But that is really an orthogonal discussion to the companies that are trying to really push the state of the art, and the former companies trying to hold the latter back is frankly a little bit silly and ultimately a self-serving business strategy that has nothing to do with what's best for the "industry".

Anyways we've gotten far enough afield from the original topic. Maybe I should have listened to the smarter part of myself and not done the public soapbox, but suffice it to say I have little sympathy for companies who are going to be forced to deal with a bunch of this stuff in the next few years as everyone had *ample* warning this time around.
 
The main problem with RT is that games are currently designed with the limitations of rasterization in mind.

That goes both ways: RTX/DXR is also designed to improve games as they are currently designed (and for streaming open-world games not even that -- instead it's a significant step back).
 
I have little sympathy for companies who are going to be forced to deal with a bunch of this stuff in the next few years as everyone had *ample* warning this time around.

Now we really need to see some UE5 demos and games so we can figure out which architectures are going to crap out :D

That goes both ways: RTX/DXR is also designed to improve games as they are currently designed (and for streaming open-world games not even that -- instead it's a significant step back).

Due to lack of BVH LOD/streaming? Maybe, but you gotta start somewhere.
 
orthogonal discussion to the companies that are trying to really push the state of the art, and the former companies trying to hold the latter back is frankly a little bit silly and ultimately a self-serving business strategy that has nothing to do with what's best for the "industry".
Can we be a little more specific regarding which companies/practices/tech features are holding the industry back, and which aren't?
 
Anyways we've gotten far enough afield from the original topic. Maybe I should have listened to the smarter part of myself and not done the public soapbox, but suffice it to say I have little sympathy for companies who are going to be forced to deal with a bunch of this stuff in the next few years as everyone had *ample* warning this time around.

It's fine to show little sympathy for them, but we still have to pretend that these companies matter if we want portable standards, or they'll just go off doing their own thing entirely, which I think would be worse than what we have now...

Can we be a little more specific regarding which companies/practices/tech features are holding the industry back, and which aren't?

Every desktop vendor has at least supported dynamic dispatch in the last five years, but there are still some caveats, so it's hard to point fingers in this case. If control flow is uniform in an unstructured CFG, dynamic dispatch could be reasonably implemented on current compilers, since masking wouldn't be necessary. The reason we use structured CFGs in the first place is to make it easy for compilers to emit masking instructions in the presence of divergent branching. We could have dynamic dispatch, but it's probably too much to ask for vendors to also handle divergent, unstructured control flow in their compilers. Subgroup operations might be undefined under unstructured control flow as well.
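
To make the masking point a bit more concrete, here's a rough CUDA sketch (CUDA only because it's the easiest place to write this down; the kernel and numbers are made up): structured divergence that the compiler can handle by masking lanes, followed by a warp/subgroup reduction whose explicit lane mask is what keeps it well-defined.

```cuda
// Rough sketch: structured divergence handled by lane masking, plus a warp
// (subgroup) reduction with an explicit participation mask. Made-up kernel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void divergent_sum(const int* in, int* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int v = (i < n) ? in[i] : 0;

    // Structured, divergent control flow: the compiler can mask lanes so each
    // one only commits the side of the branch it took, and everything
    // re-converges at the end of the if/else.
    if (v % 2 == 0)
        v *= 3;
    else
        v += 1;

    // Subgroup (warp) operation after re-convergence. The explicit lane mask
    // is the contract that keeps this well-defined; calling it from divergent
    // or unstructured control flow with the wrong mask is where things become
    // undefined.
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);

    if ((threadIdx.x & 31) == 0)
        atomicAdd(out, v);
}

int main()
{
    const int n = 256;
    int h_in[n];
    for (int i = 0; i < n; ++i) h_in[i] = i;

    int *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, sizeof(int));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
    cudaMemset(d_out, 0, sizeof(int));

    divergent_sum<<<(n + 127) / 128, 128>>>(d_in, d_out, n);

    int h_out = 0;
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sum = %d\n", h_out);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```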

Even AMD and Intel don't implement callable shaders in ray tracing with dynamic dispatch. AMD implements callable shaders as a huge switch statement, while Intel implements them in hardware via a Bindless Thread Dispatch mode, which I doubt can be used in either compute or graphics shaders. One has to wonder what those two would stand to gain from using dynamic dispatch with ray tracing.
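
For illustration, the difference between the "huge switch statement" approach and genuine dynamic dispatch looks roughly like this. This is just a CUDA-flavored sketch with made-up shader names and payload, not how any vendor's RT stack is actually written.

```cuda
// Illustrative contrast between "uber-switch" callable dispatch and true
// dynamic dispatch via device function pointers. Shader names, payload layout
// and the dispatch table are invented for the sketch.
#include <cstdio>
#include <cuda_runtime.h>

struct Payload { float throughput; int bounces; };

__device__ void shade_metal(Payload& p)   { p.throughput *= 0.9f; p.bounces++; }
__device__ void shade_glass(Payload& p)   { p.throughput *= 0.5f; p.bounces++; }
__device__ void shade_diffuse(Payload& p) { p.throughput *= 0.2f; p.bounces++; }

// (a) Uber-switch: every callable is baked into one kernel and selected by ID.
// Simple for the compiler, but the kernel grows with every shader added and
// divergent IDs serialize through the switch.
__device__ void call_by_switch(int shader_id, Payload& p)
{
    switch (shader_id) {
        case 0: shade_metal(p);   break;
        case 1: shade_glass(p);   break;
        case 2: shade_diffuse(p); break;
    }
}

__global__ void shade(const int* shader_ids, Payload* payloads, int n)
{
    // (b) True dynamic dispatch: an indirect call through a function pointer.
    // Making divergent indirect calls efficient is the part that needs
    // compiler and hardware cooperation.
    using ShaderFn = void (*)(Payload&);
    ShaderFn table[3] = { shade_metal, shade_glass, shade_diffuse };

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    call_by_switch(shader_ids[i], payloads[i]);  // (a)
    table[shader_ids[i]](payloads[i]);           // (b)
}

int main()
{
    const int n = 3;
    int h_ids[n] = { 0, 1, 2 };
    Payload h_payloads[n] = { {1.0f, 0}, {1.0f, 0}, {1.0f, 0} };

    int* d_ids = nullptr;
    Payload* d_payloads = nullptr;
    cudaMalloc(&d_ids, sizeof(h_ids));
    cudaMalloc(&d_payloads, sizeof(h_payloads));
    cudaMemcpy(d_ids, h_ids, sizeof(h_ids), cudaMemcpyHostToDevice);
    cudaMemcpy(d_payloads, h_payloads, sizeof(h_payloads), cudaMemcpyHostToDevice);

    shade<<<1, 32>>>(d_ids, d_payloads, n);

    cudaMemcpy(h_payloads, d_payloads, sizeof(h_payloads), cudaMemcpyDeviceToHost);
    printf("throughput[0] = %f, bounces[0] = %d\n",
           h_payloads[0].throughput, h_payloads[0].bounces);

    cudaFree(d_ids);
    cudaFree(d_payloads);
    return 0;
}
```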

As for dynamic parallelism, AMD only cared about implementing it in OpenCL 2.0 to meet conformance targets, so it's not even implemented in their officially supported HIP API, and Nvidia won't implement OpenCL 2.0 at all. The best-case scenario is Microsoft unilaterally forcing all vendors to expose the feature in Direct3D, and are they even going to contemplate making that move?
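
For reference, this is roughly what dynamic parallelism looks like where it does exist, in CUDA (the kernels are made up; it needs relocatable device code, something like nvcc -rdc=true -lcudadevrt). Nothing equivalent is exposed in HIP or D3D today, which is the complaint.

```cuda
// Minimal CUDA dynamic parallelism sketch: a parent kernel decides on the
// device how much child work to launch, with no round trip to the host.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void child(int* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

__global__ void parent(int* data, int n)
{
    // Device-side launch: one thread inspects the problem and spawns a child
    // grid sized at runtime.
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        int threads = 128;
        int blocks = (n + threads - 1) / threads;
        child<<<blocks, threads>>>(data, n);
    }
}

int main()
{
    const int n = 1024;
    int* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(int));
    cudaMemset(d_data, 0, n * sizeof(int));

    parent<<<1, 32>>>(d_data, n);
    cudaDeviceSynchronize();   // parent is not complete until its child grid is

    int first = -1;
    cudaMemcpy(&first, d_data, sizeof(int), cudaMemcpyDeviceToHost);
    printf("data[0] = %d (expect 1)\n", first);

    cudaFree(d_data);
    return 0;
}
```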

A more complete implementation of cooperative groups would require forward progress guarantees. Currently, neither AMD nor Intel make any promises about forward progress guarantees on their hardware, and here's what an insider at Microsoft had to say about the subject very recently...

Jesse Natalie said:
Yeah we're having active discussions with GPU vendors and other developers about similar topics, and it's a thorny issue. I'd suggest not trying to bake in any assumptions about forward progress.
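
To make concrete what kind of code "bakes in assumptions about forward progress", here's a minimal CUDA cooperative groups sketch (kernel and sizes made up; build with something like nvcc -rdc=true): the grid-wide sync is only valid because the cooperative launch guarantees every block is resident and making progress at the same time, which is exactly the guarantee AMD and Intel won't commit to.

```cuda
// Sketch of a pattern that silently assumes forward progress: a grid-wide
// barrier via cooperative groups. Check cudaDevAttrCooperativeLaunch before
// relying on it in real code.
#include <algorithm>
#include <cstdio>
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void two_phase(int* a, int* b, int n)
{
    cg::grid_group grid = cg::this_grid();
    int start  = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;

    for (int i = start; i < n; i += stride)
        a[i] = i * 2;                        // phase 1: every block writes a[]

    grid.sync();                             // grid-wide barrier across all blocks

    for (int i = start; i < n; i += stride)
        b[i] = a[i] + a[n - 1 - i];          // phase 2 reads results from other blocks
}

int main()
{
    int n = 1 << 16;
    int *d_a = nullptr, *d_b = nullptr;
    cudaMalloc(&d_a, n * sizeof(int));
    cudaMalloc(&d_b, n * sizeof(int));

    // A cooperative launch must not exceed the number of blocks that can be
    // resident at once, otherwise grid.sync() could never complete.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int threads = 256, blocksPerSm = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSm, two_phase, threads, 0);
    int blocks = std::min(blocksPerSm * prop.multiProcessorCount,
                          (n + threads - 1) / threads);

    void* args[] = { &d_a, &d_b, &n };
    cudaLaunchCooperativeKernel((void*)two_phase, dim3(blocks), dim3(threads),
                                args, 0, nullptr);
    cudaDeviceSynchronize();

    int b0 = 0;
    cudaMemcpy(&b0, d_b, sizeof(int), cudaMemcpyDeviceToHost);
    printf("b[0] = %d (expect %d)\n", b0, 2 * (n - 1));  // a[0] + a[n-1]

    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}
```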

I may be a pessimist, but that doesn't mean I have the wrong reasons to be skeptical about how things will play out in the future. Maybe Andrew knows something more from the other vendors that I don't, which is why I suspect he may be more confident about the future than I am...
 