Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

What happens now is entirely irrelevant, because we are still looking at cross generation games that do not use the crucial DX12 Ultimate features such as mesh shading and Sampler Feedback. Also, some next gen games like Avatar will use ray tracing as their only lighting solution, so HW-RT is of course automatically much faster than software, meaning any Turing card has an instant advantage in these new games.

Once cross gen is over, RDNA1 will age like milk. Consoles won't help it here, as those also have HW-RT and DX12 Ultimate.
It isn’t irrelevant because currently 5700xt owners are enjoying the benefits. It is a relevant data point that being on both consoles benefits AMD in the PC space. It’s also looking like it will be years before the transition you mention will occur.
 
Trying to spin my argument as not based on reality is disingenuous at best and malicious at worst. As GPUs have gotten more powerful, we have almost always moved away from crutch rendering techniques created to bridge the inadequate power of the hardware. We move to techniques with less compromise than before. We moved from bilinear/trilinear filtering to anisotropic, as an example, and in the same vein, the industry will move away from DLSS when the time is right.
I don't think this is true, which is largely what Function is trying to point out here. The industry moves away from things that no longer have a place in rendering; we don't move away from them simply because we have more power. If that were so, SSAA would have dominated, but it hasn't. The reason we don't use SSAA, despite how old it is, is that the power could be better spent elsewhere instead of on supersampling down. In the same way, we are hitting a crossover point where increasing graphical fidelity using traditional T&L methods is now more expensive than going the RT route, which is why there is now an emergence of RT accelerators. The cost of the compromise got too high, so we are instead moving to incorporate ray tracing now.

From that perspective, running DLSS, CBR, or any other upsampling technique should not cost more than native, so these upsampling techniques should not go away by default unless a superior upsampling technique arrives. Even if you have enough power to run 4K native, upsampling techniques can instead be used to render 8K or 16K. As long as there is a drive to increase screen resolutions while power output has a physical cap, upsampling techniques are unlikely to go away; if anything, they are likely to become more abundant.
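To make that scaling argument concrete, here's a quick back-of-the-envelope sketch (purely illustrative, my own numbers rather than anything from a vendor): the 0.67x and 0.5x per-axis factors roughly match common "quality" and "performance" upscaling presets, and real savings obviously depend on how much of the frame is resolution-bound and on the cost of the upscale pass itself.

```cpp
#include <cstdio>

int main() {
    // Display resolutions a game might target as screens keep scaling up.
    struct Res { const char* name; int w, h; };
    const Res targets[] = { {"4K", 3840, 2160}, {"8K", 7680, 4320} };

    // Per-axis internal render scales: native, roughly a "quality" preset
    // (~0.67x per axis) and a "performance" preset (0.5x per axis).
    const double scales[] = { 1.00, 0.67, 0.50 };
    const char*  labels[] = { "native", "quality-ish", "performance" };

    for (const Res& r : targets) {
        for (int i = 0; i < 3; ++i) {
            // Shading work scales with the pixels rendered before the
            // upscale, i.e. (w * s) * (h * s), so halving each axis cuts
            // the pre-upscale work to roughly a quarter of native.
            const double mpix = (r.w * scales[i]) * (r.h * scales[i]) / 1e6;
            std::printf("%-3s %-12s %6.1f Mpix shaded per frame\n",
                        r.name, labels[i], mpix);
        }
    }
    return 0;
}
```

Point being: as long as the output resolution keeps climbing, there is always a lower internal resolution worth reconstructing from.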

Technology advances in leaps followed by a period of stagnation and then the process repeats itself. In 100 years, the use of DLSS and many rendering techniques used today will be non-existent due to technological advances. We always move to techniques with less compromise as technology advances.

With regards to the prevalence of DLSS, as long as it's not open source, its period of relevance is drastically limited. It'll eventually be replaced by an open source equivalent at some point, and we're already seeing evidence of that with Intel's proposed solution. I don't think DLSS is useless. It's quite useful, but it has its very evident flaws. I guess I take strong offence to people parading around spewing out the marketing speak of their favourite hardware manufacturer. I'm not saying you're doing that, but certain people here are quite guilty of it.

Perhaps, given a long enough time, this may be true, given that a framework like DirectML exists for it. But the effort is not so easily replicated. Nvidia can continually improve the performance of DLSS, as they have been, much faster than developers will be able to develop newer non-ML-based upsampling techniques. And if other companies are competing in ML-based upsampling and AA, then there will be plenty of competition between models anyway. Think of how long we've been iterating on TAA, TAAU, and MSAA, compared to how quickly DLSS has iterated in such a short time. MSAA is still around because some games still use forward rendering, for instance, where it provides AA with minimal blur.

The power of DLSS is not in the hardware; on the contrary, the power is in the software behind DLSS itself. We are unlikely to see Nvidia let DLSS go, as that product and other ML-based graphical solutions are likely to be worth more than the silicon they produce as time goes on.

Each technique will find its place. Calling it a crutch is perhaps a crude description of what it is; it's a tool much more than it is a crutch. Besides, if upsampling is really not your thing, Nvidia still offers DLAA for those seeking a different anti-aliasing approach.
 
PhysX add-in PCI cards left the scene a long time ago, but the technology did not; NV incorporated it into their GPUs, which is a much better solution anyway.

The add-in cards are too slow as well; the ones I have are basically useless in games.

Well, far from dead as there is a new PhysX version with the latest driver. ;)

It's very dead, there's not been a decent game release with PhysX for years now.

Shame as it's by far my most favorite tech to see in games, Cryostasis being one of the best.
 
It's very dead, there's not been a decent game release with PhysX for years now.
Shame as it's by far my most favorite tech to see in games, Cryostasis being one of the best.
There have been a few games in 2020 that support PhysX (Mount & Blade II, Metro Exodus), though many developers are now using the Unreal Engine and I imagine it will be used less.
 
Drivers have nothing to do with it, there's not been a AAA game released with PhysX for years now, it's dead.

Your take was that things that NV adopts somehow become abandoned or die off quickly. In the case of PhysX, that's not really the case given its history and today's status. Turning this around, how much was the ID buffer, which was mostly Sony's creation, actually used? What about some other features the company has added to their systems but that never saw much use outside of some AAA exclusives? How does that compare to something like PhysX? What about ray tracing and ML reconstruction tech?
 
I don't think this is true, which is largely what Function is trying to point out here. The industry moves away from things that no longer have a place in rendering; we don't move away from them simply because we have more power. If that were so, SSAA would have dominated, but it hasn't. The reason we don't use SSAA, despite how old it is, is that the power could be better spent elsewhere instead of on supersampling down. In the same way, we are hitting a crossover point where increasing graphical fidelity using traditional T&L methods is now more expensive than going the RT route, which is why there is now an emergence of RT accelerators. The cost of the compromise got too high, so we are instead moving to incorporate ray tracing now.

From that perspective, running DLSS, CBR, or any other upsampling technique should not cost more than native, so these upsampling techniques should not go away by default unless a superior upsampling technique arrives. Even if you have enough power to run 4K native, upsampling techniques can instead be used to render 8K or 16K. As long as there is a drive to increase screen resolutions while power output has a physical cap, upsampling techniques are unlikely to go away; if anything, they are likely to become more abundant.

Indeed! You don't discard techniques because you have more power; you discard them because there are better alternatives unlocked through the advancement of processors, procedures or algorithms.

If DLSS offers better performance and IQ than native (it does offer the first, and can offer the second), then it will still do that on a more powerful GPU. And ML upscaling / AA is still in its relative infancy. ML is an entire field, not just an "Nvidia GPU thing". PhysX might have been somewhat sidelined (still used tho), but "physics acceleration", either through GPU compute or CPU vector extensions, certainly hasn't been! And neither will machine learning and image processing based on ML models (and even that undersells it: you could use colour + depth + motion vectors + potentially all of the above from previous frames + god knows what; it's all potentially grist for the mill).
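To put some flesh on the "colour + depth + motion vectors + previous frames" point, here's a minimal CPU-side sketch of the reprojection/accumulation step that temporal techniques build on. It's my own illustration with invented names and a made-up blend factor, not any particular implementation, and it assumes motion vectors store the screen-space offset from the previous frame to the current one.

```cpp
#include <cmath>   // std::lerp (C++20)

struct Float2 { float x, y; };
struct Float3 { float r, g, b; };

// Find where this pixel was last frame by stepping backwards along the
// per-pixel motion vector.
inline Float2 ReprojectUv(Float2 uv, Float2 motion) {
    return { uv.x - motion.x, uv.y - motion.y };
}

// One pixel of a bare-bones temporal resolve: blend the current jittered,
// aliased sample with the history sample fetched at the reprojected UV.
// TAA/TAAU use a fixed (and usually clamped) blend like this; an ML
// reconstruction model consumes the same inputs but learns the combination.
inline Float3 TemporalResolve(const Float3& current,
                              const Float3& reprojectedHistory,
                              float alpha = 0.1f)  // new-sample weight, illustrative
{
    return { std::lerp(reprojectedHistory.r, current.r, alpha),
             std::lerp(reprojectedHistory.g, current.g, alpha),
             std::lerp(reprojectedHistory.b, current.b, alpha) };
}
```

The interesting part is that every one of those inputs is already lying around in a modern renderer, which is why this whole family of techniques keeps improving.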

Ironically, the "crutch" of DLSS has actually been enabled by advances in both processors and applied concepts in information processing. It can allow higher performance and, depending on how you weight things, a more accurate image than a natively rendered image. ML SS isn't the "crutch" compared to native, it's potentially the new improved crutch that you move onto after the crutch of native rendering and less complex forms of temporal AA. IMO, of course.

We moved from bilinear/trilinear filtering to anisotropic, as an example, and in the same vein, the industry will move away from DLSS when the time is right.

... so you're saying that the industry will move onto an improved and extended version of DLSS, that runs on modified versions of the current processing units in DLSS accelerated GPUs?

I mean, you know how aniso is done, right?

Anyway, you've made it awfully clear that you have no interest in discussing actual rendering. Instead, you've chosen to dwell on semantics and fallacious arguments. Please refrain from quoting me in the future, as I have no interest in intellectually dishonest discussions.

Your dramatics and intellectual purity are duly noted.
 
There have been a few games in 2020 that support PhysX (Mount & Blade II, Metro Exodus), though many developers are now using the Unreal Engine and I imagine it will be used less.

Correct me if I am wrong but isn't Physx the default physics engine used by UE up to the last iteration of UE 4?
 
Your take was that things that NV adopts somehow become abandoned or die off quickly. In the case of PhysX, that's not really the case given its history and today's status. Turning this around, how much was the ID buffer, which was mostly Sony's creation, actually used? What about some other features the company has added to their systems but that never saw much use outside of some AAA exclusives? How does that compare to something like PhysX? What about ray tracing and ML reconstruction tech?

I said proprietary Nvidia technology........ Name me a single piece of Nvidia-only tech that's become standard in games.

ID buffer is an irrelevant discussion unless you have usage statistics.

RT and ML reconstruction are not proprietary Nvidia technology, so they are, again, irrelevant to this discussion.
 
I said proprietary Nvidia technology........ Name me a single piece of Nvidia-only tech that's become standard in games.

ID buffer is an irrelevant discussion unless you have usage statistics.

RT and ML reconstruction are not proprietary Nvidia technology, so they are, again, irrelevant to this discussion.

Getting there first and popularizing GPU physics, BVH acceleration hardware, etc., still counts. Nobody buys a 3080 thinking they'll be playing 3080-exclusive games for 15 years.
 
Getting there first and popularizing GPU physics, BVH acceleration hardware, etc., still counts. Nobody buys a 3080 thinking they'll be playing 3080-exclusive games for 15 years.

'Getting there first' has nothing to do with what I'm talking about...... But I'm done derailing the thread now.
 
Not to mention there's never been an Nvidia proprietary technology that's become a standard; they all die out (R.I.P PhysX :cry:), so claiming DLSS as an advantage for the RTX series is a stretch.

Once a solution like Intel's XeSS is available and delivers good results on all hardware DLSS will simply die out.
I just don't see that happening. Not for a long time at least.
The plumbing for ML-based upsampling/AA is not standard. You can't just dump it into the rendering pipeline. Each model may require something different to work; DLSS, for instance, requires aliased, untouched frames, motion vectors, etc., while other models may choose to separate SS from AA, making the pipeline awkward.
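For a sense of what that plumbing looks like in practice, here's a rough sketch of the per-frame inputs an engine typically has to wire up for a DLSS-style temporal upscaler. The names and struct are my own illustration, not the real SDK, and another vendor's model could reasonably want a different set.

```cpp
#include <cstdint>

// Opaque handle standing in for whatever the engine's RHI uses
// (ID3D12Resource*, VkImage, a texture ID, etc.).
using TextureHandle = std::uint64_t;

// Illustrative only -- roughly the per-frame data a DLSS-style temporal
// upscaler expects the engine to route to it. None of this comes for free:
// the renderer has to skip its own AA, output motion vectors for everything
// it can, and jitter the camera every frame.
struct UpscalerFrameInputs {
    TextureHandle aliasedColor;     // pre-AA, pre-tonemap scene colour at render res
    TextureHandle motionVectors;    // per-pixel screen-space motion
    TextureHandle depth;            // scene depth at render res
    float         jitterX, jitterY; // sub-pixel camera jitter applied this frame
    float         exposure;         // or an exposure texture, model-dependent
    std::uint32_t renderWidth, renderHeight; // internal (input) resolution
    std::uint32_t outputWidth, outputHeight; // display (output) resolution
    bool          resetHistory;     // e.g. on camera cuts, to drop stale history
};
```

Until competing models agree on something like this set of inputs, every integration is bespoke work for the engine team.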

These models will take some time before they converge on common ground for deployment. With the approach still in its infancy, I think we will see a great many different styles of ML implementation before winners are chosen and they settle on a common foundation to be applied to engines.

Going open source will help move deep learning super sampling forward without a doubt. Knowing that more than a single vendor could take advantage of it may get more developers on board to support it. But that doesn't necessarily stop Nvidia from doing the same thing. DLSS and DLAA could also be open source one day; it's just that right now they choose not to be. Eventually market pressure may force them to do it, as they want to continue to be a leader in this field.

I don't know. At first guess, I suspect the algorithms that are easiest to plumb into engine with very good results are likely to be the ones to be the most successful in terms of adoption.
 
Correct me if I am wrong but isn't Physx the default physics engine used by UE up to the last iteration of UE 4?

Yes, and it's currently deprecated in UE too, so eventually it's going to be removed in a future release as well. Unity Technologies also wants to move away from PhysX during their engine rebuild ...

Moving away from external dependencies such as PhysX is just a natural evolution of long-term software projects like game engines. In-house solutions are created because they not only allow developers to regain more control over their own projects but are also more desirable from a maintenance standpoint. Epic Games and Unity Technologies don't like the fact that they have very little control over PhysX, and they likely don't trust Nvidia to give their issues reasonable attention or to add the features they need, so PhysX is falling out of use as time goes on ...
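For what it's worth, the usual way engines insulate themselves from this kind of dependency is a thin in-house interface that the rest of the codebase talks to, so the backend can be swapped without touching gameplay code. A minimal sketch with invented names (not how UE or Unity actually structure it):

```cpp
#include <cstdint>
#include <memory>

struct Vec3 { float x, y, z; };
struct RigidBodyDesc { Vec3 position; float mass; };
using BodyId = std::uint32_t;

// Engine-facing physics interface: gameplay and tools code only ever sees
// this, never the vendor SDK's own types.
class IPhysicsWorld {
public:
    virtual ~IPhysicsWorld() = default;
    virtual BodyId AddRigidBody(const RigidBodyDesc& desc) = 0;
    virtual void   ApplyImpulse(BodyId body, const Vec3& impulse) = 0;
    virtual void   Step(float dtSeconds) = 0;
};

// Hypothetical factories: one backend wraps PhysX today, another could be
// an in-house solver tomorrow, and nothing built on IPhysicsWorld changes.
std::unique_ptr<IPhysicsWorld> CreatePhysXWorld();
std::unique_ptr<IPhysicsWorld> CreateInHouseWorld();
```

Once everything goes through a layer like that, retiring the external SDK is a backend swap rather than a rewrite, which is exactly why the engine vendors can afford to deprecate it.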

PhysX might still be fine to use for small or short-term projects, since there may not be as many complex interactions going on in that case, but if it ever does become unmaintained in the future then it's not a great idea to rely on a dead project that won't get any bug fixes or new features; otherwise, ugly workarounds and crazy hacks become necessary just to improve, extend, or advance your own project, which isn't ideal. Omniverse is Nvidia's halfway attempt to compete against the Unreal Engine and other content creation tools because they feel insecure that their investment is going to become obsolete ...

Similarly, DLSS can also be said to be another "external dependency that developers have no control over" but it's not anywhere near as invasive to the engine code as PhysX so if developers are experiencing issues or unexpected outcomes then a realistic backup plan is that they can either just disable DLSS or simply remove it from the engine ...
 
Ya, it's definitely in the best interest of developers not to have to rely on anything owned by Nvidia. How long did Nvidia refuse to make PhysX multi-threaded to artificially improve their cards' positioning in benchmarks?
 
The industry moves away from things that no longer have a place in rendering; we don't move away from them simply because we have more power.

While this is true in a sense, in my opinion, things are deemed to no longer have a place due to their crude approach to solving the problem or their less than desirable results. Having more computational power allows for the exploration of differing approaches and solutions.

If that were so, SSAA would have dominated, but it hasn't. The reason we don't use SSAA, despite how old it is, is that the power could be better spent elsewhere instead of on supersampling down. In the same way, we are hitting a crossover point where increasing graphical fidelity using traditional T&L methods is now more expensive than going the RT route, which is why there is now an emergence of RT accelerators. The cost of the compromise got too high, so we are instead moving to incorporate ray tracing now.

With regards to SSAA, that is perhaps a poor example to get your point across. SSAA never made sense in the first place.

You'll also need to provide further clarification as to what you mean by the cost getting too high. If you're referring to computational cost, I'd argue against that line of thinking as the driving factor for the adoption of RT accelerators. In my view, the real driving factor is the lack of scalability in man hours.

From that perspective, running DLSS, CBR, or any other upsampling technique should not cost more than native, so these upsampling techniques should not go away by default unless a superior upsampling technique arrives. Even if you have enough power to run 4K native, upsampling techniques can instead be used to render 8K or 16K. As long as there is a drive to increase screen resolutions while power output has a physical cap, upsampling techniques are unlikely to go away; if anything, they are likely to become more abundant.
I actually believe that these upsampling techniques will go away with time. They might be replaced with more efficient techniques, or we might get to a point where there's sufficient power that they're not needed. I don't share the belief that power will be used to drive increased screen resolutions. Consumers' spending patterns on display devices suggest that we're reaching a point where resolution is good enough. I expect to see extremely poor adoption of 8K and a paradigm shift in display devices.

Perhaps, given a long enough time, this may be true, given that a framework like DirectML exists for it. But the effort is not so easily replicated. Nvidia can continually improve the performance of DLSS, as they have been, much faster than developers will be able to develop newer non-ML-based upsampling techniques. And if other companies are competing in ML-based upsampling and AA, then there will be plenty of competition between models anyway. Think of how long we've been iterating on TAA, TAAU, and MSAA, compared to how quickly DLSS has iterated in such a short time. MSAA is still around because some games still use forward rendering, for instance, where it provides AA with minimal blur.

The power of DLSS is not in the hardware; on the contrary, the power is in the software behind DLSS itself. We are unlikely to see Nvidia let DLSS go, as that product and other ML-based graphical solutions are likely to be worth more than the silicon they produce as time goes on.

In my opinion, it doesn't really matter what Nvidia does and whether they choose to let go of DLSS. The industry has shown time and time again that it prefers open source technology to proprietary software. Once there are open source equivalents on the market, even if they're not as good, they'll see heavy adoption. None of these companies are stupid. They all understand that Nvidia is trying to create a dependency on their technology stack so they can monetize out the wazoo. It won't last if they don't make it open source. From a financial perspective, it doesn't make sense in the long run. The only company it benefits is Nvidia.

Each technique will find its place. Calling it a crutch is perhaps a crude description of what it is; it's a tool much more than it is a crutch. Besides, if upsampling is really not your thing, Nvidia still offers DLAA for those seeking a different anti-aliasing approach.

I’ve seen DLAA in action and in my opinion, it’s something that need not exist. I don’t foresee myself using it at any time in the future.
 
If you're referring to computational cost, I'd argue against that line of thinking as the driving factor for the adoption of RT accelerators. In my view, the real driving factor is the lack of scalability in man hours.

What problems do you propose can be solved by throwing more man hours at current tech? Art, animation? There are fundamental mathematical limitations to current graphics rendering techniques that no amount of developer time can fix.

I actually believe that these upsampling techniques will go away with time. They might be replaced with more efficient techniques, or we might get to a point where there's sufficient power that they're not needed. I don't share the belief that power will be used to drive increased screen resolutions. Consumers' spending patterns on display devices suggest that we're reaching a point where resolution is good enough. I expect to see extremely poor adoption of 8K and a paradigm shift in display devices.

This will surely age very poorly. Just like all of the other “we’ll never need more than…” predictions in the history of technology.
 