Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

I would argue it's more important than DLSS, as it works on so much more hardware and so is more likely to replace DLSS as the go-to solution.

Consoles already have an abundance of upscaling techniques, so it's not as important for consoles as it is for PC.

A developer can focus on DLSS and only have a select few benefit from it (and not even all Nvidia owners at that), or they can invest in FSR 2.0 and get it on all consoles and a much, much bigger PC audience.

I know which solution I would go with.
Games have DLSS because Nvidia incentivizes it.. It's a marketing bullet point.. and these days it doesn't seem particularly difficult to implement.. so I doubt much changes.

Nvidia will just move on to the next thing for AMD to copy.
 
Pretty sure that's just transcribing the commentary to make the article. Makes no sense to write a whole bespoke article that just says the same thing as the commentary.

I've seen people do it both ways. It's unusual to see written copy so closely follow verbal commentary, because long sentences are best avoided in verbal delivery. Your eyes can skim back in written copy, but nobody is going to wind back to the start of a sentence from 20 seconds ago. That's what I was taught anyway, but that was a while ago, before social media was popular.

I'm sure Alex can clarify, but I would have thought DF just write a single copy intended for the article then record it, perhaps making some tweaks in recording for the video where they are referring to things on screen.
 
Games have DLSS because Nvidia incentivizes it.. It's a marketing bullet point.. and these days it doesn't seem particularly difficult to implement.. so I doubt much changes.

Nvidia will just move on to the next thing for AMD to copy.

I think it will completely change regardless of whether Nvidia pays for it.

If you're going to go through the trouble of implementing FSR 2.0 on consoles and non-RTX architectures on PC, then what's the point in spending manpower on a DLSS implementation just to get that extra 5% better IQ? It's not as if RTX GPUs can't run FSR 2.0.

It makes no sense in an industry where everything is so time sensitive.
 
We'll wait for more FSR 2.0 tests, but for now I was right with the theory I had about DLSS and consoles: consoles don't need it, as they can do better (or about the same) with less dedicated silicon (no need for INT4/INT8) using smarter custom reconstruction techniques. Now that we can compare DLSS 2 against FSR 2.0, it's like comparing DLSS 2 against something like Insomniac's tech (as both techniques are very similar), and the results are exactly what I was expecting for now: DLSS is no better than the best techniques used on consoles (and those don't need tensor cores).
 
We'll wait for more FSR 2.0 tests, but for now I was right with the theory I had about DLSS and consoles: consoles don't need it, as they can do better (or about the same) with less dedicated silicon (no need for INT4/INT8) using smarter custom reconstruction techniques. Now that we can compare DLSS 2 against FSR 2.0, it's like comparing DLSS 2 against something like Insomniac's tech (as both techniques are very similar), and the results are exactly what I was expecting for now: DLSS is no better than the best techniques used on consoles (and those don't need tensor cores).

Insomniac's temporal injection is god-like on console, and the only other instance that gets close to it is the checkerboarding used in Days Gone.
 
Insomniac's temporal injection is god-like on console, and the only other instance that gets close to it is the checkerboarding used in Days Gone.
I clearly saw a difference between the 30fps and 60fps modes in Miles Morales and Ratchet & Clank, though there is no PC version with DLSS, so it's hard to compare.
 
If you're going to go through the trouble of implementing FSR 2.0 on consoles and non-RTX architectures on PC, then what's the point in spending manpower on a DLSS implementation just to get that extra 5% better IQ? It's not as if RTX GPUs can't run FSR 2.0.

Now discount the consoles, since FSR 2.0 won't see any wide adoption there given that custom hand-coded temporal reconstruction already exists. We're then looking at around 80% of the PC gaming market being on NV hardware. DLSS is quicker to implement, faster, and has superior IQ (especially from very low resolutions).

We'll wait for more FSR 2.0 tests, but for now I was right with the theory I had about DLSS and consoles: consoles don't need it, as they can do better (or about the same) with less dedicated silicon (no need for INT4/INT8) using smarter custom reconstruction techniques. Now that we can compare DLSS 2 against FSR 2.0, it's like comparing DLSS 2 against something like Insomniac's tech (as both techniques are very similar), and the results are exactly what I was expecting for now: DLSS is no better than the best techniques used on consoles (and those don't need tensor cores).

This guy understands it better. I don't see why FSR 2.0 would see widespread use, if any, on consoles when many developers have their own custom temporal upscaler that is probably better for their games than FSR 2.0 would be anyway. It was actually the PC that was lacking this useful feature last generation, not the consoles. DLSS, and now FSR 2.0, means we finally have a solution for everyone in the PC gaming space.
DLSS's deep learning is used for training the model (the learning improves the data model for the next iteration; it isn't happening while you're gaming). So no, you won't strictly need acceleration, but then you need fine hand-tuned work instead.

I wonder, though: does Insomniac's tech, for example, upscale from very low resolutions as well? With DLSS I can get a 1080p image reconstructed very close to native 4K, or better, with very little loss in performance.
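
For what it's worth, here's a rough sketch of the resolution arithmetic behind that claim. It's a hypothetical helper using the commonly quoted per-axis scale factors for the quality presets; the names and values are assumptions for illustration, not something pulled from either SDK.

Code:
// Hypothetical helper: derive the internal render resolution from the output
// resolution and a per-axis scale factor. Preset names/values below are the
// commonly quoted ones, used purely for illustration.
#include <cstdio>

struct Resolution { int w, h; };

Resolution renderResolution(Resolution output, float perAxisScale) {
    return { static_cast<int>(output.w * perAxisScale),
             static_cast<int>(output.h * perAxisScale) };
}

int main() {
    const Resolution out4k{3840, 2160};
    const struct { const char* name; float scale; } presets[] = {
        {"Quality",     0.667f},  // ~2560x1440 internal
        {"Balanced",    0.58f},   // ~2227x1252 internal
        {"Performance", 0.5f},    // 1920x1080 internal
    };
    for (const auto& p : presets) {
        const Resolution r = renderResolution(out4k, p.scale);
        std::printf("%-12s -> %dx%d internal for a 4K output\n", p.name, r.w, r.h);
    }
    return 0;
}

So "1080p reconstructed to 4K" is the 0.5x-per-axis case, i.e. rebuilding the image from a quarter of the output pixel count.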
 
I think it will completely change regardless of whether Nvidia pays for it.

If you're going to go through the trouble of implementing FSR 2.0 on consoles and non-RTX architectures on PC, then what's the point in spending manpower on a DLSS implementation just to get that extra 5% better IQ? It's not as if RTX GPUs can't run FSR 2.0.

It makes no sense in an industry where everything is so time sensitive.
I believe that the inputs needed for FSR 2.0 are essentially the same as for DLSS, though. And at this point, with modern engines already tested with DLSS on PC, I believe we are going to see more and more games simply support both. As long as you've set the stage for either upscaler, it's a small task to implement both.
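
To illustrate that "same inputs, small task" point, here's a minimal, hypothetical engine-side sketch. The names (UpscalerInputs, IUpscaler, and so on) are invented for illustration and are not the actual DLSS (NGX) or FSR 2.0 (FFX) SDK interfaces.

Code:
// Sketch: both temporal upscalers consume essentially the same per-frame data,
// so an engine can hide them behind one interface and pick a backend at runtime.
#include <memory>

struct TextureHandle { /* engine-specific GPU resource handle */ };

// The shared per-frame inputs a temporal upscaler typically needs.
struct UpscalerInputs {
    TextureHandle color;          // jittered, lower-resolution scene colour
    TextureHandle depth;          // matching depth buffer
    TextureHandle motionVectors;  // per-pixel motion vectors
    float jitterX, jitterY;       // sub-pixel camera jitter for this frame
    float exposure;               // scene exposure value
    bool  resetHistory;           // true on camera cuts / teleports
};

class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual void evaluate(const UpscalerInputs& in, TextureHandle output) = 0;
};

class Dlss2Upscaler final : public IUpscaler {
public:
    void evaluate(const UpscalerInputs& in, TextureHandle output) override {
        // ...forward the same inputs to the DLSS library here...
        (void)in; (void)output;
    }
};

class Fsr2Upscaler final : public IUpscaler {
public:
    void evaluate(const UpscalerInputs& in, TextureHandle output) override {
        // ...forward the same inputs to the FSR 2.0 library here...
        (void)in; (void)output;
    }
};

enum class UpscalerKind { DLSS2, FSR2 };

// Once the inputs exist, swapping or adding a backend is a small, local change.
std::unique_ptr<IUpscaler> makeUpscaler(UpscalerKind kind) {
    if (kind == UpscalerKind::DLSS2) return std::make_unique<Dlss2Upscaler>();
    return std::make_unique<Fsr2Upscaler>();
}

If XeSS ends up needing the same inputs, it would just be a third backend behind the same interface.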
 
I believe that the inputs needed for FSR 2.0 are essentially the same as for DLSS, though. And at this point, with modern engines already tested with DLSS on PC, I believe we are going to see more and more games simply support both. As long as you've set the stage for either upscaler, it's a small task to implement both.

But then there's the testing required for each method, and DLSS becomes just another thing to test for what is, in comparison, a very small user base.

Maybe Nvidia could release DLSS without the ML aspect and have it work on all architectures like FSR 2.0 (doubtful).

Nvidia have come out with some awesome technology in the past (PhysX is my favourite), but those technologies have always ended up dying when a more open solution becomes available, because Nvidia's refusal to allow their tech on other platforms kills it off.

I feel that over the next 12 months DLSS is going to be another victim of that, and if Intel get their AI upscaling working on consoles and a broad range of PC GPUs, then DLSS will well and truly be finished.
 
But then there's the testing required for each method, and DLSS becomes just another thing to test for what is, in comparison, a very small user base.

Maybe Nvidia could release DLSS without the ML aspect and have it work on all architectures like FSR 2.0.

Nvidia have come out with some awesome technology in the past (PhysX is my favourite), but those technologies have always ended up dying when a more open solution becomes available, because Nvidia's refusal to allow their tech on other platforms kills it off.

I feel that over the next 12 months DLSS is going to be another victim of that.

Very small user base? Around 80% of the GPU gamer market is NV; it's the largest user base, by far. PhysX wasn't killed either.
 
But then there's the testing required for each method, and DLSS becomes just another thing to test for what is, in comparison, a very small user base.

Maybe Nvidia could release DLSS without the ML aspect and have it work on all architectures like FSR 2.0.

Nvidia have come out with some awesome technology in the past (PhysX is my favourite), but those technologies have always ended up dying when a more open solution becomes available, because Nvidia's refusal to allow their tech on other platforms kills it off.

I feel that over the next 12 months DLSS is going to be another victim of that.
I still think it's probably worthwhile. I think the real question is, when XeSS is released, will games support all 3?

Also, PhysX was not created by Nvidia; they bought it. There was always a CPU fallback option for non-Nvidia GPUs. And it's been open source for almost 4 years, and AMD still hasn't made their GPUs accelerate it in hardware.
 
I still think it's probably worthwhile. I think the real question is, when XeSS is released, will games support all 3?

I cannot wait to see... PS4/Xbone started their generation with MLAA and FXAA and ended it with clever upscaling tech.

Imagine the end of the current generation: they'll be upscaling to 4K from 240p with no quality difference compared to native :runaway:
 
But then there's the testing required for each method, and DLSS becomes just another thing to test for what is, in comparison, a very small user base.

Maybe Nvidia could release DLSS without the ML aspect and have it work on all architectures like FSR 2.0 (doubtful).

Nvidia have come out with some awesome technology in the past (PhysX is my favourite), but those technologies have always ended up dying when a more open solution becomes available, because Nvidia's refusal to allow their tech on other platforms kills it off.

I feel that over the next 12 months DLSS is going to be another victim of that, and if Intel get their AI upscaling working on consoles and a broad range of PC GPUs, then DLSS will well and truly be finished.

If you go by Steam surveys, RTX makes up 25% of the user base and is about 2.5x larger than all AMD GPUs combined. FSR 2 doesn't work on anything from Nvidia that's below a GTX 1070. Nvidia GPUs below a 1070 are probably the largest user segment on Steam.
 
Couldn't decide where to put this, so thought in here might be best.

Overclocked PS3 RSX with custom firmware tested.


The channel itself is also really good, with loads of PS2 vs GC vs DC vs Xbox comparisons.

And, shockingly, PS2 holds its own against GC more than you'd think, going by some of these videos.
 
Very small user base? Around 80% of the GPU gamer market is NV; it's the largest user base, by far.
Yes, because DLSS is only supported by Turing and Ampere. Most NV GPUs on the market are older than that and do not support DLSS. Even if DLSS could be run via shaders, Nvidia specifically does not support this option, although that way even older cards could use it.

PhysX wasn't killed either.
GPU-accelerated PhysX is as good as dead. On PC, Nvidia even restricted the CPU implementation to one core, while on consoles it could use all CPU cores. So Nvidia only prevented good PhysX CPU acceleration on PC (for non-Nvidia GPUs), and this way they actively killed PhysX in the PC space. For the normal user it was too slow (because only one core could be used, or the GPU was not capable enough at the time), and today it doesn't really play a big role.

I still think it's probably worthwhile. I think the real question is, when XeSS is released, will games support all 3?

Also, PhysX was not created by Nvidia; they bought it. There was always a CPU fallback option for non-Nvidia GPUs. And it's been open source for almost 4 years, and AMD still hasn't made their GPUs accelerate it in hardware.
You can't blame AMD for not supporting Nvidia technologies. Almost always, if a game is optimized for AMD cards and is in the Nvidia support program, the game ends up changed through the Nvidia libraries directly before release (as developers must use the newest versions). So the games often then run horribly on AMD cards because the optimizations aren't compatible anymore. It really seems that Nvidia uses its position very aggressively, and this happened quite often when a new game with a known IP launched.
I really can't remember a single thing developed by Nvidia that made it into the market as a standardized feature. They were never open with their new features, and the market always had to develop a new standard so that everyone could use it.
 
The Assassin's Creed Origins patch only uses cross-gen tools to unlock the framerate. There is no native PS5 version and no native Xbox Series X version (the game is only gen9-aware).

Assassin's Creed Origins renders at a virtually constant 2160p/60fps on Xbox Series X. On PS5, however, you have to settle for 1620p/60fps because it runs the PS4 Pro version via backwards compatibility. Videos to come.

And Series S, 1080p/60fps


 