Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

I think the disagreements we’re encountering are based entirely on the scope at which we view the discussion. Your arguments are predicated on the limitations and understanding of technology today. I’m not thinking about computer graphics on a 10 or 20 year scope, I’m thinking about how it will evolve over the next few hundred years. The TV of today is not the same TV that was invented in 1927. Neither is the phone the same as the first phone ever made. The same can be said for computers, computer chips, and rendering techniques.

Computers and computer graphics are fields in their infancy compared to other professional fields of study. I expect drastic changes as the field continues to mature. We won’t rely on silicon forever, and we’re already exploring alternatives like graphene nanoribbons. Even the way we make chips will change; photonic processors are an area that is being heavily researched.
I don't think it's of any value to talk about technologies so far out that they may largely never see existence in the consumer marketplace. It seems like quite a sideline discussion to the realities of today's problems.
If we are talking about the future of computation, the reality is that machine learning is much more the future than brute force is.

AlphaGo is an AI that can decimate every other AI and every human at Go, a game where the number of possible board states exceeds the number of atoms in our known universe (http://norvig.com/atoms.html). In other words, it's computationally impossible to solve through brute force. Yet AlphaGo swiftly annihilates any existing engine and any person on the planet, even though, before it, humans routinely beat Go AIs precisely because of that enormous number of possibilities. Solving Go is one of the best applications of machine learning for exactly this reason: it's not computationally possible to solve by brute force.
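As a rough back-of-envelope check of that scale (my own sketch, not from the linked page): a 19x19 board has 361 points, each empty, black, or white, and even this naive count, which ignores Go's legality rules, dwarfs the usual ~10^80 estimate for atoms in the observable universe.

```python
# Back-of-envelope comparison of Go's naive state space to the atom-count estimate.
# 3**361 ignores legality rules, so it's only a loose bound, but the order of
# magnitude is the point.
import math

board_points = 19 * 19                          # 361 intersections
log10_states = board_points * math.log10(3)     # each point: empty / black / white
log10_atoms = 80                                # common rough estimate: ~10^80 atoms

print(f"naive Go board states ~ 10^{log10_states:.0f}")   # ~10^172
print(f"atoms in the universe ~ 10^{log10_atoms}")
```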

This really shows the power of approximation, and if we're talking about the future of computation, approximation can go a very long way in modelling something without needing to calculate it exactly. For this reason, while you see DLSS going away as brute force catches up, I see machine learning taking an even greater role. The more things you want to model, and the more complexity you add, the harder brute-force calculation becomes; you will need approximation. I don't think it's worthwhile to debate the idea that these algorithms will be useless 10-20 years from now because we've somehow managed to unlock personal quantum computing for home use.

Machine learning is computationally heavy: the more computational power we give it, the better our models become within a given time frame. That means that for the great many tasks that don't require insane precision and accuracy but do have a very large, open-ended space of possibilities, machine learning will always scale better than brute force.
 
I disagree. The gains happened due to driver improvements in the first few months after launch; that's over now, and the position of the 5700 XT has remained stationary for the past 18 months. In the future, I expect a quick nosedive as games start utilizing DX12U features.
Not true. In the link you provided it was 5% faster than a 2070. Now it's 10% faster.


Even in your incorrect scenario, a 5% difference doesn't really change your gaming experience much; however, being locked out of graphical features like ray tracing and mesh shaders is a lot worse for any player.
The 5700xt lacking features is something completely separate and doesn't invalidate that AMD benefits from being in both consoles.
 
PhysX is not an industry standard

G-Sync is not an industry standard

TXAA is not an industry standard

You see where I'm going with this?

Nvidia comes out with some nice gaming tech, but it never becomes an industry standard.
There is no industry standard for physics simulation; each engine does its own thing. TAA is not an industry standard either; many studios opted for their own variation of it, and it's about to be replaced by TAAU and its variants. G-Sync is ubiquitous now and is everywhere, despite being an upgrade over the basic, featureless industry standard (VESA).

Most of these things that NVIDIA developed have yet to be standardized, and it was never NVIDIA's aim to standardize any of them; they were just advancing graphics/display within their own ecosystem for their customers.

Speaking of NVIDIA technologies that did reach standard-like status, I have to mention CUDA, the most widely used API for compute and AI; OptiX, the most widely used API for professional visualization and professional 3D design; and Ansel and Highlights, the most widely used photo mode interface on PC.

An honorable mention is 3D Vision; it was the only game in town back then, before VR replaced all the active 3D glasses.
The 5700xt lacking features is something completely separate and doesn't invalidate that AMD benefits from being in both consoles.
The presence of AMD in both consoles will in itself destroy the RDNA1 GPUs: as RDNA2 GPUs get used to their fullest, RDNA1 will fade away.
 
3D Vision was amazing when it was properly implemented. Absolutely head and shoulders above regular movie 3D or console gaming 3D (which is the same as movie 3D). I remember playing much of Skyrim in 3D sitting about 1 ft away from my monitor. It was like VR except really sharp.
 
You didn't answer the question :)

Sorry, but I didn’t understand your question in the context of my previous post. I asserted that there is a desire to reduce man-hours and production costs for developers. You then asked a question unrelated to the train of thought in my previous comment.

I'm not so naive as to think we've already hit the limits of display technology after a few decades. Whatever advancements are ahead of us will require orders of magnitude higher rendering performance.

At no point did I suggest that we had hit the limits of display technology. I argued that resolution would not be the primary factor driving consumer adoption. I made that assertion because consumer spending patterns have shown that this is the case. Instead, we’ve seen TV manufacturers innovating in terms of form factor, panel type, feature set, etc. What does that assertion have to do with the limits of display technology? It seems to me that you just created a straw man to argue against, because I never once suggested that. All I suggested was that a paradigm shift will occur, and we’re already seeing that in the market.
 
I don't think it's of any value to talk about technologies so far out that they may largely never see existence in the consumer marketplace. It seems like quite a sideline discussion to the realities of today's problems.
If we are talking about the future of computation, the reality is that machine learning is much more the future than brute force is.

AlphaGo is an AI that can decimate every other AI and every human at Go, a game where the number of possible board states exceeds the number of atoms in our known universe (http://norvig.com/atoms.html). In other words, it's computationally impossible to solve through brute force. Yet AlphaGo swiftly annihilates any existing engine and any person on the planet, even though, before it, humans routinely beat Go AIs precisely because of that enormous number of possibilities. Solving Go is one of the best applications of machine learning for exactly this reason: it's not computationally possible to solve by brute force.

This really shows the power of approximation, and if we're talking about the future of computation, approximation can go a very long way in modelling something without needing to calculate it exactly. For this reason, while you see DLSS going away as brute force catches up, I see machine learning taking an even greater role. The more things you want to model, and the more complexity you add, the harder brute-force calculation becomes; you will need approximation. I don't think it's worthwhile to debate the idea that these algorithms will be useless 10-20 years from now because we've somehow managed to unlock personal quantum computing for home use.

Machine learning is computationally heavy: the more computational power we give it, the better our models become within a given time frame. That means that for the great many tasks that don't require insane precision and accuracy but do have a very large, open-ended space of possibilities, machine learning will always scale better than brute force.
I’m not really sure where to begin. I don’t know how you took my thoughts on DLSS's path to irrelevance and equated them to ML’s path to irrelevance. I’m just going to bypass that completely because that’s not an assertion I ever made.

Now, with regards to my comments on the disappearance of certain algorithms once there is sufficient power: when there is sufficient computational power to render a calculation trivial, there will be little reason to approximate. Remember, this all spawned out of my comment asserting that DLSS was/is a crutch. There will come a time when we can render path-traced scenes at a trivial cost without the need to use artifact-infested approximations like DLSS. If you don’t agree with that premise, then the discussion cannot continue because we’re at an impasse. Just 10 years ago, we couldn’t dream of rendering path-traced scenes in real time at playable frame rates. Even without the use of DLSS, the 3080 and 3090 can render fully path-traced scenes at frame rates we couldn’t have imagined before. So this notion that DLSS is here to stay forever is just a bunch of malarkey to me.
 
Sorry, but I didn’t understand your question in the context of my previous post. I asserted that there is a desire to reduce man-hours and production costs for developers. You then asked a question unrelated to the train of thought in my previous comment.

Right, which implies that the primary motivation for RT is a reduction in man hours and not that it's fundamentally impractical to do certain things with the current graphics stack regardless of man hours.

At no point did I suggest that we had hit the limits of display technology. I argued that resolution would not be the primary factor driving consumer adoption. I made that assertion because consumer spending patterns have shown that this is the case. Instead, we’ve seen TV manufacturers innovating in terms of form factor, panel type, feature set, etc. What does that assertion have to do with the limits of display technology? It seems to me that you just created a straw man to argue against, because I never once suggested that. All I suggested was that a paradigm shift will occur, and we’re already seeing that in the market.

We're talking about display technology that will put additional load on graphics rendering. Increased resolution is the most obvious one. The other is framerate. Another is VR.
 
I’m not really sure where to begin. I don’t know how you took my thoughts on DLSS's path to irrelevance and equated them to ML’s path to irrelevance. I’m just going to bypass that completely because that’s not an assertion I ever made.

Now, with regards to my comments on the disappearance of certain algorithms once there is sufficient power: when there is sufficient computational power to render a calculation trivial, there will be little reason to approximate. Remember, this all spawned out of my comment asserting that DLSS was/is a crutch. There will come a time when we can render path-traced scenes at a trivial cost without the need to use artifact-infested approximations like DLSS. If you don’t agree with that premise, then the discussion cannot continue because we’re at an impasse. Just 10 years ago, we couldn’t dream of rendering path-traced scenes in real time at playable frame rates. Even without the use of DLSS, the 3080 and 3090 can render fully path-traced scenes at frame rates we couldn’t have imagined before. So this notion that DLSS is here to stay forever is just a bunch of malarkey to me.

I don't agree, partly because you're missing my argument around scaling. For your argument to hold, you have to assume that developers should not aim to consume all the available power on a chip. That's just unlikely in a market where graphics is consistently being asked to do more and more.

From my perspective, super-resolution is a solved problem in machine learning, with output looking incredibly close to native. At its core, DLSS is exactly that, but Nvidia has had to effectively reduce its quality to meet a specific frame-time budget, which is the part I don't think you've given fair thought.

Your main axiom is ultimately that silicon and brute force can be developed faster than DLSS can improve (and DLSS can easily be made better as the number of tensor cores increases), which is the precise flaw I've been trying to point out for some time now.

That's clearly not the case, which is precisely why DLSS was invented in the first place. CUs cannot scale up as quickly as tensor cores can, just by the nature of their size. Improving a neural network to increase graphical quality is trivial until you add the stipulation that the job must complete in 2 ms or less. Both Nvidia and Intel have succeeded here, but quality is dramatically reduced as a result. With more power, they can increase the depth of their networks, completing the job in the same frame time while improving the quality of the output.

Brute-force rendering will continually increase the resources needed as resolution increases. As load and resolution rise, that requires significantly more ALU and bandwidth resources per frame per second; there's no way around that. For ML, it doesn't matter what the input is, be it a blank screen or the most complex scene: DLSS will always complete its job in the same amount of time with the same amount of resources on a given hardware configuration. That's just the way neural networks work; it's a fixed-cost system.
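To make that fixed-cost point concrete, here's a toy cost model (every number is invented purely for illustration, not a benchmark of any real GPU): shading cost is assumed to scale with pixel count, while the upscaling network is assumed to cost a fixed amount per frame on a given chip.

```python
# Toy model: brute-force cost grows with output resolution, the ML pass does not.
# Every constant here is an assumption for illustration only.

SHADING_MS_PER_MPIXEL = 4.0    # assumed shading cost per megapixel
UPSCALER_FIXED_MS = 1.5        # assumed fixed network cost per frame

def native_ms(w: int, h: int) -> float:
    """Brute force: cost scales with the number of output pixels."""
    return (w * h / 1e6) * SHADING_MS_PER_MPIXEL

def upscaled_ms(w: int, h: int, scale: float = 0.5) -> float:
    """Render at a reduced internal resolution, then pay a fixed network cost."""
    return (w * h * scale * scale / 1e6) * SHADING_MS_PER_MPIXEL + UPSCALER_FIXED_MS

for w, h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    print(f"{w}x{h}: native {native_ms(w, h):5.1f} ms, upscaled {upscaled_ms(w, h):5.1f} ms")
```

The gap between the two widens as the output resolution grows, which is the scaling argument in a nutshell.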

Even if you solve the wattage and ALU power issues on a chip, the reality is that we cannot scale bandwidth as quickly as we can scale ALU. There's just not nearly enough of it at reasonable price points. If the argument is that someday ALU and bandwidth will be infinite so we no longer need scaling, then this argument was a waste of time to begin with. We can't simply handwave bottlenecks away, because that ignores why these technologies were created as solutions in the first place.

Like I'm trying to understand here...
A 3090 has 35 TF of shader power generated by 10,496 CUDA cores, and only a fraction of the chip is tensor cores.
Yet the 3090 has 142 TF of tensor processing power generated by just 328 tensor cores.

That's roughly a 4x difference in throughput from just 3.1% of the core count. If adoption of deep learning supersampling were very high and we rebalanced the silicon, couldn't we increase the tensor cores by 10x and reduce the ALU, since we'd only need enough to render 1080p?

Do you believe Nvidia couldn't produce a better network knowing it now had 1.42 petaflops of tensor throughput to work with?
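Running the figures quoted above through a quick back-of-envelope script (the TF numbers are the marketing figures as quoted, not measurements):

```python
# Back-of-envelope ratios from the RTX 3090 figures quoted above.
cuda_cores, fp32_tflops = 10_496, 35.0
tensor_cores, tensor_tflops = 328, 142.0

print(f"tensor cores relative to CUDA cores: {tensor_cores / cuda_cores:.1%}")       # ~3.1%
print(f"throughput ratio (tensor vs FP32):   {tensor_tflops / fp32_tflops:.1f}x")     # ~4x
print(f"per-core throughput ratio:           "
      f"{(tensor_tflops / tensor_cores) / (fp32_tflops / cuda_cores):.0f}x")          # ~130x
print(f"hypothetical 10x tensor cores:       {tensor_tflops * 10 / 1000:.2f} PF")     # the 1.42 PF above
```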

Humour me for a moment:
[Image: Cyberpunk 2077 official NVIDIA GeForce RTX performance benchmarks with ray tracing and DLSS on the RTX 3090, 3080, 3070, and 3060 Ti]

How much larger or faster would the silicon need to be for the 3090 to brute-force the frame rate of its DLSS variant?
At least 2x, likely more: 2x the ALU and 2x the bandwidth, nearly 2x the die size, and you'd need to double the amount of memory and double the bus width.

It would be trivial to scale tensor power and release heavier networks by comparison.

You speak about photonic processors, but their main application is actually deep learning and AI.

https://news.mit.edu/2019/ai-chip-light-computing-faster-0605

Chip design drastically reduces energy needed to compute with light
Simulations suggest photonic chip could run optical neural networks 10 million times more efficiently than its electrical counterparts.

For electrical chips, including most AI accelerators, there is a theoretical minimum limit for energy consumption. Recently, MIT researchers have started developing photonic accelerators for optical neural networks. These chips perform orders of magnitude more efficiently, but they rely on some bulky optical components that limit their use to relatively small neural networks.

So we know the tensor processing cores are quite a bit smaller than CUDA cores. Once again, when it comes to scaling, if you managed to create a photonic processor, it would likely be used for AI before general-purpose computing, once again due to size.
 
I don’t know how you took my thoughts on DLSS's path to irrelevance and equated them to ML’s path to irrelevance.

Because you called it a "crutch" for a lack of better GPU technology even though it represents the cutting edge of GPU (processing and software) technology.

DLSS was, according to you, a crutch for weak GPUs. A crutch isn't a crutch because of a brand; it's a crutch because it's needed to support something, and DLSS is going massively beyond just 'supporting' a weakness. But other forms of ML AA must therefore also be a 'crutch' according to you. Sod the brand; you're all about "rendering technologies", right?

If you saw the ML-driven, post-render, temporally based approach to enhancing rendering as potentially beneficial, you wouldn't call it a crutch compared to 'standard' native rasterisation.

Now, with regards to my comments on the disappearance of certain algorithms once there is sufficient power: when there is sufficient computational power to render a calculation trivial, there will be little reason to approximate. Remember, this all spawned out of my comment asserting that DLSS was/is a crutch. There will come a time when we can render path-traced scenes at a trivial cost without the need to use artifact-infested approximations like DLSS. If you don’t agree with that premise, then the discussion cannot continue because we’re at an impasse. Just 10 years ago, we couldn’t dream of rendering path-traced scenes in real time at playable frame rates. Even without the use of DLSS, the 3080 and 3090 can render fully path-traced scenes at frame rates we couldn’t have imagined before. So this notion that DLSS is here to stay forever is just a bunch of malarkey to me.

Camouflaging yourself in vague and non-committal clouds of vapid bullshit, and independently creating conditions and clauses wherein no-one can challenge the stupid stuff you've said is childlike.

No-one said (Nvidia) DLSS is here to stay; we just said that machine learning image processing offers amazing opportunities and is in its infancy, while you called the concept a crutch and then went all Karen.

Dude ... just, .... dude.
 
I don't agree, partly because you're missing my argument around scaling. For your argument to hold, you have to assume that developers should not aim to consume all the available power on a chip. That's just unlikely in a market where graphics is consistently being asked to do more and more.

From my perspective, super-resolution is a solved problem in machine learning, with output looking incredibly close to native. At its core, DLSS is exactly that, but Nvidia has had to effectively reduce its quality to meet a specific frame-time budget, which is the part I don't think you've given fair thought.

Your main axiom is ultimately that silicon and brute force can be developed faster than DLSS can improve (and DLSS can easily be made better as the number of tensor cores increases), which is the precise flaw I've been trying to point out for some time now.

That's clearly not the case, which is precisely why DLSS was invented in the first place. CUs cannot scale up as quickly as tensor cores can, just by the nature of their size. Improving a neural network to increase graphical quality is trivial until you add the stipulation that the job must complete in 2 ms or less. Both Nvidia and Intel have succeeded here, but quality is dramatically reduced as a result. With more power, they can increase the depth of their networks, completing the job in the same frame time while improving the quality of the output.

Brute-force rendering will continually increase the resources needed as resolution increases. As load and resolution rise, that requires significantly more ALU and bandwidth resources per frame per second; there's no way around that. For ML, it doesn't matter what the input is, be it a blank screen or the most complex scene: DLSS will always complete its job in the same amount of time with the same amount of resources on a given hardware configuration. That's just the way neural networks work; it's a fixed-cost system.

Even if you solve the wattage and ALU power issues on a chip, the reality is that we cannot scale bandwidth as quickly as we can scale ALU. There's just not nearly enough of it at reasonable price points. If the argument is that someday ALU and bandwidth will be infinite so we no longer need scaling, then this argument was a waste of time to begin with. We can't simply handwave bottlenecks away, because that ignores why these technologies were created as solutions in the first place.

Like I'm trying to understand here...
A 3090 has 35 TF of shader power generated by 10,496 CUDA cores, and only a fraction of the chip is tensor cores.
Yet the 3090 has 142 TF of tensor processing power generated by just 328 tensor cores.

That's roughly a 4x difference in throughput from just 3.1% of the core count. If adoption of deep learning supersampling were very high and we rebalanced the silicon, couldn't we increase the tensor cores by 10x and reduce the ALU, since we'd only need enough to render 1080p?

Do you believe Nvidia couldn't produce a better network knowing it now had 1.42 petaflops of tensor throughput to work with?

Humour me for a moment:
[Image: Cyberpunk 2077 official NVIDIA GeForce RTX performance benchmarks with ray tracing and DLSS on the RTX 3090, 3080, 3070, and 3060 Ti]

How much larger or faster would the silicon need to be for the 3090 to brute-force the frame rate of its DLSS variant?
At least 2x, likely more: 2x the ALU and 2x the bandwidth, nearly 2x the die size, and you'd need to double the amount of memory and double the bus width.

It would be trivial to scale tensor power and release heavier networks by comparison.

You speak about photonic processors, but their main application is actually deep learning and AI.

https://news.mit.edu/2019/ai-chip-light-computing-faster-0605



So we know the tensor processing cores are quite a bit smaller than CUDA cores. Once again, when it comes to scaling, if you managed to create a photonic processor, it would likely be used for AI before general-purpose computing, once again due to size.
The bolded part is not what I’m arguing at all, and at no point did I ever say that. Please quote me where I said those words in those terms. My only axiom is that the cost of rendering path-traced scenes on GPUs will become so trivial that people won’t need to worry about using DLSS. Even if DLSS provides additional performance, it won’t matter because the performance will already be sufficient.

Secondly, this idea of infinitely increasing resolution is perhaps the most backwards idea I’ve ever come across on this forum. It’s a disagreement that @trinibwoy and I have been having over multiple pages, and I’ll get to his post later. Most companies don’t exist primarily to create technology for technology's sake. They’re in business to design products that people will buy. With regards to display devices, the general consumer’s purchasing habits are showing us that they do not see increased resolution as a sufficient reason to purchase new display devices. It’s why the adoption of 4K is so low in comparison to 1080p and the adoption of 8K display devices is beyond laughable. It’s why people chose Netflix over 4K Blu-rays: the additional quality provided by the increased bitrate and audio quality of 4K Blu-rays did not outweigh the convenience. The resolution was good enough. This is a tech forum and I understand that it’s nice to get lost in the tech, but it’s important not to lose sight of this point. So this idea of infinitely increasing resolution is beyond laughable to me. We will need additional computational power, but the idea that it’ll be mostly devoted to resolution is a point I’ll never agree on, and I don’t care how unpopular this opinion is.
 
Right, which implies that the primary motivation for RT is a reduction in man hours and not that it's fundamentally impractical to do certain things with the current graphics stack regardless of man hours.



We're talking about display technology that will put additional load on graphics rendering. Increased resolution is the most obvious one. The other is framerate. Another is VR.
Honestly, I feel like you’re seriously losing the context of the discussion. This discussion is centred around Nvidia’s DLSS “crutch” for their underperforming hardware, and that’s been the entire context of the discussion. The argument I made was made in this context. Of course certain things have always been impractical from a traditional rendering standpoint, and solving them was always the end goal. It’s not the reason Nvidia rushed out this underperforming and underprepared hardware, which is being held up by the crutch that is DLSS. Remove DLSS and it mostly falls on its face. Coincidentally, that’s exactly why a person might use crutches. Baked GI, shadow maps, SSR, etc. were all used convincingly to fool the average user into thinking an image or render looks like real life. If you really needed accuracy, offline renders are a thing.

So why did Nvidia rush their hardware out? To capitalize on the realities of today’s market. We’re still quite a few generations away from robust real-time path tracing. However, if you can accelerate the workload just a little bit, you can make a ton of money while doing it. If you think Nvidia is here for the passion of the technology, I’ve got a bridge to sell you. This current push towards real-time RT is purely driven by the desire to increase profit by capitalizing on the realities of today’s markets. The old methods still work well in fooling the average consumer, but they don’t scale as the scope increases. For those of us who own Nvidia stock, we already know this reality and benefit from it every day. Go look at Nvidia’s growth as a company from the announcement of the first RT GPUs till today. Part of this growth is driven by the shortage. Part of it is driven by the fact that this move has created a near monopoly for them. Companies are spending huge amounts of money on their hardware and software stack in an attempt to reduce their production costs.

With regards to this discussion of infinitely increasing resolution and frame rate, please refer to my previous post above. It’s not a premise I’ll ever agree with you on, because most companies do not exist to create technology for technology's sake. They exist to design products that make them money. If most consumers don’t care about resolution past some arbitrary point, companies will look for other marketable features to entice buyers. As it stands, consumers are telling us that increasing the resolution is not a marketable selling point. As other display form factors evolve and mature, they’ll also reach an arbitrary point where the average consumer no longer cares about the increased resolution of the device. Once companies can’t sell or market on that point, the product reaches the maturity stage of the product lifecycle. Every class of product is subject to this lifecycle. For TVs, I’d argue that we’re in, or very close to entering, the mature stage of the lifecycle.
 
My only axiom is that the cost of rendering path-traced scenes on GPUs will become so trivial that people won’t need to worry about using DLSS. Even if DLSS provides additional performance, it won’t matter because the performance will already be sufficient.
So, when has anything ever been "enough" in terms of computer graphics? And if we are quantifying that as some sort of acceptable frame rate at some sort of acceptable image quality, then why wouldn't you use DLSS, or any other machine learning solution capable of providing significantly more samples per output pixel with a minimal hit to performance? I get that there are implementations currently in use that have ghosting issues and other artifacts, but that's mostly been solved in the more up-to-date DLLs, and adjusting games to run at an internal resolution that matches your monitor while supersampling from a higher output resolution back down to native (via DSR) makes games look great with a minimal performance hit compared to just native-resolution rendering. Looking to the future, with a hypothetical "sufficient enough" amount of processing power per pixel, you would still be able to provide more detail if you used a DLSS-like machine learning image enhancement.

I don't understand why you think it's a crutch. That would imply DLSS needs to be leaned on for cards supporting it to be comparable with the average. But that isn't the case. Nvidia has ballpark-comparable rasterization and compute performance and better ray tracing performance compared to their rivals, and DLSS is enough of a disruption that both Intel and AMD have showcased performance-enhancing upscaling solutions. And AMD's solution is extremely lossy in image quality comparisons.
 
Yeah, it's not a crutch at all. It's making a top product even better... Nvidia isn't "relying" on it to catch up to competitors. The only way I would consider it a crutch would be if AMD were ~20-30% faster than Nvidia without the need for any upscaling, and Nvidia REQUIRED the upscaling to be competitive with them...

THEN I might consider it a crutch in that case...

But come on now... that's not the case and Nvidia simply made a killer feature which utilizes their newest hardware and architecture to the fullest.
 
So, when has anything ever been "enough" in terms of computer graphics?
16x anisotropic filtering, or any other feature that nets no perceptible gains.
And if we are quantifying that as some sort of acceptable frame rate at some sort of acceptable image quality, then why wouldn't you use DLSS, or any other machine learning solution capable of providing significantly more samples per output pixel with a minimal hit to performance? I get that there are implementations currently in use that have ghosting issues and other artifacts, but that's mostly been solved in the more up-to-date DLLs,
The text in bold is false, as most of the issues haven’t been resolved. I don’t use it because I can render at the resolution I want with my GPU while exceeding my monitor's refresh rate. The added performance is not beneficial to me, and I won’t degrade my image quality with visual artifacts and ghosting in the name of DLSS. I can see the artifacts and they’re beyond awful.
and adjusting games to run at an internal resolution that matches your monitor while supersampling from a higher output resolution back down to native (via DSR) makes games look great with a minimal performance hit compared to just native-resolution rendering. Looking to the future, with a hypothetical "sufficient enough" amount of processing power per pixel, you would still be able to provide more detail if you used a DLSS-like machine learning image enhancement.

Sorry, are you saying that DLSS provides more detail than exists in the original image? That’s not how it works at all. DLSS is at best a visually clearer approximation of the next frame. The neural network is trained with images from games, and it uses that training to “intelligently” approximate the next frame from a lower temporal resolution. DLSS requires a TAA implementation to even work, and if you want to blame anything for the lack of image clarity, blame TAA, which IMO is also awful. The artifacts I complain about are details added to the frame that didn’t exist in the original frame. DLSS is essentially polluting the frame when it fails, and it fails often. I can understand how people get caught up in the DLSS hype when YouTubers try to pass opinion off as fact by citing ridiculous marketing claims: “DLSS is better than native resolution” and other erroneous statements.
I don't understand why you think it's a crutch. That would imply DLSS needs to be leaned on for cards supporting it to be comparable with the average. But that isn't the case. nVidia has ballpark comparable rasterization and compute performance and better ray tracing performance compared to their rivals, and DLSS is enough of a disruption that both Intel and AMD have both showcased performance enhancing upscaling solutions. And AMD's solution is extremely lossy in image quality comparisons.
I think it’s a crutch because if you go to the dictionary and look up the definition of crutch, it describes what DLSS is doing for Nvidia’s substandard hardware. The fact that Nvidia is doing better than AMD doesn’t change the definition of the word. When the Oxford dictionary changes the definition of the word, I’ll find the next best word to describe DLSS.
 
Is it as laughable as you repeating a nonsensical statement that nobody said?
It’s a bit rich for you to claim that I misquoted you when that’s all you’ve been doing to my posts since the beginning of our discussion. When I called you out, you failed to acknowledge your misstep and instead conveniently diverted the discussion in a different direction towards another straw man you had constructed. Forgive me for not feeling sympathetic to your complaints.

Secondly, every time I suggested that resolution would peak, you suggested that it wouldn’t. I don’t know how else you’d describe a resolution target that is always increasing, as implied by your responses. As of our last on-topic exchange, you said:
We're talking about display technology that will put additional load on graphics rendering. Increased resolution is the most obvious one. The other is framerate. Another is VR.
Again highlighting this ever increasing resolution target.
 
Sorry, are you saying that DLSS provides more detail than exists in the original image? That’s not how it works at all.
That depends on the rendering resolution. You keep looking at DLSS as a performance-saving solution where you render an image at a sub-native resolution (native being your output resolution), and that's certainly a use case, but you can also use DSR to set a higher-than-native output resolution, adjust the DLSS setting so the internal rendering resolution matches native, have DLSS reconstruct the image to a supernative resolution, and then supersample it back down to native for glorious results. There is a cost to this, obviously, and it depends on the DLSS performance of your GPU. But it's a hell of a lot less than rendering at the higher resolution DLSS resolves to.
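A quick sketch of the resolution math behind that workflow, assuming a 1440p monitor, a 2.25x DSR factor, and the commonly cited approximate DLSS per-axis scale factors (the monitor, DSR factor, and scale factors are all assumptions, not official specs):

```python
# Sketch of the DSR + DLSS combination described above.
# DLSS per-axis scale factors are common approximations, treated here as assumptions.

NATIVE = (2560, 1440)      # assumed monitor resolution
DSR_FACTOR = 2.25          # assumed DSR multiplier (1.5x per axis)

dlss_modes = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.50}

dsr_target = tuple(round(axis * DSR_FACTOR ** 0.5) for axis in NATIVE)
print(f"DSR output target: {dsr_target[0]}x{dsr_target[1]}")

for mode, scale in dlss_modes.items():
    internal = tuple(round(axis * scale) for axis in dsr_target)
    near_native = abs(internal[0] - NATIVE[0]) / NATIVE[0] < 0.05
    note = "  <- roughly native shading cost" if near_native else ""
    print(f"{mode:11s} internal render: {internal[0]}x{internal[1]}{note}")
```

In the Quality case the internal render lands at roughly the monitor's native resolution, DLSS reconstructs up to the 4K DSR target, and DSR downsamples the result back to the display, which is the workflow described above.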

To be clear, the scenario I presented earlier and the one I'm presenting now isn't one where you are rendering fewer pixels than you would have with DLSS disabled. If you start at the same image quality, for a minor performance hit, DLSS can produce many more samples per pixel, and it looks great.

16x anisotropic filtering, or any other feature that nets no perceptible gains.
Anything over 4x AF was "enough" at 1024x768. 16x might be enough for you now at the resolutions you like, but in your hypothetical situation where we have infinite power per pixel, why wouldn't we have infinite pixel density as well? You can already start to see the limitations of 16x AF at 4K. Hell, maybe even sometimes at 1440p if you don't disable the optimizations in the control panel. And those limitations are only going to get worse as resolution increases.

The text in bold is false, as most of the issues haven’t been resolved. I don’t use it because I can render at the resolution I want with my GPU while exceeding my monitor's refresh rate. The added performance is not beneficial to me, and I won’t degrade my image quality with visual artifacts and ghosting in the name of DLSS. I can see the artifacts and they’re beyond awful.
Do you not use TAA? That ghosts as well.
 
It’s a bit rich for you to claim that I misquoted you when that’s all you’ve been doing to my posts since the beginning of our discussion. When I called you out, you failed to acknowledge your misstep and instead conveniently diverted the discussion in a different direction towards another straw man you had constructed. Forgive me for not feeling sympathetic to your complaints.

Secondly, every time I suggested that resolution would peak, you suggested that it wouldn’t. I don’t know how else you’d describe a resolution target that is always increasing, as implied by your responses. As of our last on-topic exchange, you said:

Again highlighting this ever increasing resolution target.

How does the obvious fact that we haven’t hit a resolution peak yet equate to “infinite” and “ever increasing” resolution? I’m sure you can explain it.
 