Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

I guess you are going to be out of luck this gen then. If the UE5 demos are any indication, many games will rely on TSR to deliver their next-gen visuals, and TSR is often inferior to DLSS. Console gaming is out of luck for you too, as upscaling is prevalent there in almost EVERY title, with far inferior results to DLSS.

Please don't put a single link up against the many links I provided above. If you test a relatively light and small area in Metro, as Computerbase did, then AMD GPUs do fine; if you test any area with heavy use of RT GI and reflections (as done by other, smarter sites), then they are crippled hard.
Some of those sites aren't reputable. Overclock3D in particular is awful, with results that make no sense. RT tests at 1440p make more sense than 4K given current GPU performance.
 
Some of those sites aren't reputable. Overclock3D in particular is awful, with results that make no sense. RT tests at 1440p make more sense than 4K given current GPU performance.
Now you are just cherrypicking; all of these sites are reliable and their results mirror each other well. Even Hardware Unboxed achieved similar results.

Fact is, once more, AMD GPUs demonstrate an awful lack of performance compared to NVIDIA GPUs in any serious RT workload.
 
Now you are just cherrypicking; all of these sites are reliable and their results mirror each other well. Even Hardware Unboxed achieved similar results.

Fact is, once more, AMD GPUs demonstrate an awful lack of performance compared to NVIDIA GPUs in any serious RT workload.

[attached benchmark image]

Is that data reputable?
 

Computerbase tested with "normal" which uses a 4:1 pixel:ray ratio. PCGH used "ultra" which is 1:1 and Ampere is much better:
Outdoor: https://www.pcgameshardware.de/Metr...us-Enhanced-Edition-Test-Vergleich-1371524/3/
Indoor: https://www.pcgameshardware.de/Metr...Enhanced-Edition-Test-Vergleich-1371524/3/#a5
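To put the ratio difference in concrete terms, here is a rough back-of-the-envelope sketch. Assumption: the ratio is rays per output pixel per frame, ignoring bounces, denoising and everything else the actual 4A engine does, so treat the numbers as illustrative only.

```python
# Rough illustration of a 4:1 vs 1:1 pixel:ray ratio at 4K.
# Assumption: the ratio is rays per output pixel per frame; real ray
# counts (bounces, reflections, denoiser inputs) will differ.

def rays_per_frame(width, height, pixels_per_ray):
    return width * height / pixels_per_ray

for label, pixels_per_ray in [("normal (4:1)", 4), ("ultra (1:1)", 1)]:
    rays = rays_per_frame(3840, 2160, pixels_per_ray)
    print(f"{label}: ~{rays / 1e6:.1f} million rays per frame")
# normal (4:1): ~2.1 million rays per frame
# ultra (1:1): ~8.3 million rays per frame
```

So "ultra" asks the GPU to trace roughly four times as many rays per frame, which is where the heavier RT hardware starts to matter.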

/edit: The same is true for Guardians of the Galaxy, too. PCGH tested with "Ultra" and the 3090 is nearly 90% faster in 4K:
https://www.pcgameshardware.de/Guar.../Specials/PC-Benchmark-Test-Review-1382318/2/
 
@Silent_Buddha if your target is 1% lows above 120fps you will probably never see RT in your lifetime :LOL:

Yeah that kind of makes me sad. However, I hold out hope that some developer will have RT in a title with custom baseline reconstruction that in combination with some future graphics card will allow me to have RT in some games.

Note that the key thing WRT DLSS Quality is when it's compared to DLSS off. If DLSS were the base quality of the game on all available hardware then I'd obviously have nothing to compare it to and it would likely be OK. For practical purposes this means that a game that requires either a generalized or game-specific reconstruction method in order to even run, because there is no render option without it, will be fine, as that's the "baseline" image quality of the game.

I suspect, but don't have the time to test (since I only have access to a 3090Ti at my friend's house), that the few games where I find DLSS Quality mode to be an improvement over the baseline (DLSS OFF) are those where the game is engineered around some form of temporal reconstruction that has enough rendering anomalies that DLSS Quality actually improves upon it without introducing more anomalies that I can easily perceive.

In practical terms that means if, for example, a UE5 game ships and the default rendering of the game is with UE5's generalized temporal solution, then that would be the baseline. I.e., it'd be fine. And if my above suspicion is correct then DLSS Quality might be good enough to be used. The question, of course, becomes whether or not it can hit a locked or virtually locked 120 Hz, and if it can't, is it a game that is so incredibly good (like Elden Ring, where I can't lock it to 60 Hz with a GTX 1070) that I'm willing to suffer through a compromised experience?

Regards,
SB
 
How is that related to the conversation about RT performance, though? I don't follow.
I put up links from these sites: PCGamesHardware, Hardware Unboxed, TechPowerUp, Sweclockers, Tom's Hardware, Comptoir Hardware, PurePC, KitGuru, Golem.de, and Overclock3D, all pointing to the 3090Ti being between 60% and 120% faster than a 6900XT, and still some people don't see this as crippling AMD GPUs.

Computerbase tested with "normal" which uses a 4:1 pixel:ray ratio. PCGH used "ultra" which is 1:1 and Ampere is much better:
Outdoor: https://www.pcgameshardware.de/Metr...us-Enhanced-Edition-Test-Vergleich-1371524/3/
Indoor: https://www.pcgameshardware.de/Metr...Enhanced-Edition-Test-Vergleich-1371524/3/#a5
Even with RT NORMAL the 3090 is 60% faster than the 6900XT at 4K; RT ULTRA pushes that to 75% faster at 4K. Even at 1440p, the gap is 70%! Talk about a massive difference.

Recent 3090Ti benchmarks push the difference even further, to 80% and beyond. Some people are just out of touch with reality.
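For anyone unsure how these "X% faster" figures fall out of the review numbers, a quick sketch of the arithmetic with made-up frame rates (the real fps values are in the linked reviews):

```python
# How "X% faster" is derived from two average-fps results.
# The fps values below are hypothetical, purely to show the arithmetic.

def percent_faster(fps_a, fps_b):
    """How much faster card A is than card B, in percent."""
    return (fps_a / fps_b - 1.0) * 100.0

fps_3090 = 64.0    # hypothetical 3090 result
fps_6900xt = 40.0  # hypothetical 6900 XT result
print(f"{percent_faster(fps_3090, fps_6900xt):.0f}% faster")  # -> 60% faster
```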
 
If the consoles are 1080p30 then even an RTX3090 isn't enough for 1440p/60 without dropping settings.
Previous UE5 demos had the 3090 delivering 4K60, while the PS5/XSX only did 1440p30. That's 3X faster right there, and without factoring in Ray Tracing performance at all.

Matrix City is heavily CPU bound on PC at the moment; GPUs are not being used to their fullest and are thus delivering subpar performance. When the demo is fully optimized, then we can compare performance; otherwise, it's not a basis for comparison.
At PS5's launch the fastest GPU was the RTX3090, which offered 2x the PS5's raster performance; 15 months after the PS5 released, the fastest GPU was still the RTX3090, offering 2x the raster performance.
Times are different: silicon performance is slowing down, and the PC upgrade cycle is slowing down as well. Previously it was every 12 to 18 months; now it's 24 months. It's just the way things are, technology-wise.

Next year there will be an RTX 4090 with 4X the performance of a PS5. Don't worry, the status quo won't change much, it's just been delayed by a few months.
 
Previous UE5 demos had the 3090 delivering 4K60, while the PS5/XSX only did 1440p30. That's 3X faster right there, and without factoring in Ray Tracing performance at all.

Matrix City is heavily CPU bound on PC at the moment; GPUs are not being used to their fullest and are thus delivering subpar performance. When the demo is fully optimized, then we can compare performance; otherwise, it's not a basis for comparison.

Times are different: silicon performance is slowing down, and the PC upgrade cycle is slowing down as well. Previously it was every 12 to 18 months; now it's 24 months. It's just the way things are, technology-wise.

Next year there will be an RTX 4090 with 4X the performance of a PS5. Don't worry, the status quo won't change much, it's just been delayed by a few months.
I see we are comparing GPUs here. So was the 3090 paired with a 2700X or similar CPU?
 
We are 1 month away from when the 980 Ti would have launched. It certainly isn't a guarantee that we will see the 4000 series GPUs this year. TechPowerUp 980 Ti launch review:

[TechPowerUp relative performance chart at 2560x1440]


A 7850 would be at 29%. That puts the 980 Ti at 3.37x faster.

At 1440p, which the 7850 is not designed to handle, and which is not at all representative of what the consoles would have been running at.

From TPU at 1080p (PS4 resolution):

https://www.techpowerup.com/gpu-specs/radeon-hd-7850.c1055

306% as fast.

There were instances of games using some Maxwell features: HFTS, VXAO, etc. If consoles tried to run them, the performance differences would be far greater than 3.37x.

If... but are there any actual examples, or is this just theory, unlike the current real-world RT differences?

How many non-Nvidia-sponsored titles make heavy enough use of RT to cripple AMD GPUs?

Using NV sponsored titles at max RT settings isn't what's giving us the 3x performance figures here. This is with the relatively pared back RT settings that the consoles use. How else could Alex be seeing the 3x performance advantage without normalising the settings to the console level? With RT dialled up to crazy levels, like in some NV sponsored games, the difference would almost certainly be much larger. In some cases we're seeing the 3090 doubling the performance of the 6900XT in the PC space.
 
Yeah that kind of makes me sad. However, I hold out hope that some developer will have RT in a title with custom baseline reconstruction that in combination with some future graphics card will allow me to have RT in some games.

Note that the key thing WRT DLSS Quality is when it's compared to DLSS off. If DLSS were the base quality of the game on all available hardware then I'd obviously have nothing to compare it to and it would likely be OK. For practical purposes this means that a game that requires either a generalized or game-specific reconstruction method in order to even run, because there is no render option without it, will be fine, as that's the "baseline" image quality of the game.

I suspect, but don't have the time to test (since I only have access to a 3090Ti at my friend's house), that the few games where I find DLSS Quality mode to be an improvement over the baseline (DLSS OFF) are those where the game is engineered around some form of temporal reconstruction that has enough rendering anomalies that DLSS Quality actually improves upon it without introducing more anomalies that I can easily perceive.

In practical terms that means if, for example, a UE5 game ships and the default rendering of the game is with UE5's generalized temporal solution, then that would be the baseline. I.e., it'd be fine. And if my above suspicion is correct then DLSS Quality might be good enough to be used. The question, of course, becomes whether or not it can hit a locked or virtually locked 120 Hz, and if it can't, is it a game that is so incredibly good (like Elden Ring, where I can't lock it to 60 Hz with a GTX 1070) that I'm willing to suffer through a compromised experience?

Regards,
SB

While I totally respect your position on native vs upscaled image quality, as a fellow 1070 user I have to ask: how would 4K DLSS Quality be any worse than what you must be using right now? We're pretty lucky if we can run modern titles with most of the bells and whistles (no RT, obviously) at native 1440p. 4K is pretty much off the table. And 4K DLSS Q would certainly be much better image quality than native 1440p, since it uses 1440p as its base resolution to upscale from.
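For reference, here is a small sketch of the internal resolutions DLSS renders at, using the commonly cited per-axis scale factors (Quality ≈ 67%, Balanced ≈ 58%, Performance = 50%, Ultra Performance ≈ 33%). This is an approximation that ignores dynamic resolution scaling, but it shows what "4K DLSS Quality has a 1440p base" means, and why 1440p output with DLSS Quality drops below 1080p internally:

```python
# Approximate internal render resolution per DLSS mode.
# Assumption: the commonly cited fixed per-axis scale factors,
# no dynamic resolution scaling.
SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_res(out_w, out_h, mode):
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for out_w, out_h in [(3840, 2160), (2560, 1440)]:
    for mode in SCALE:
        w, h = internal_res(out_w, out_h, mode)
        print(f"{out_w}x{out_h} {mode}: renders at ~{w}x{h}")
# 3840x2160 Quality: renders at ~2560x1440  (4K DLSS Q upscales from 1440p)
# 2560x1440 Quality: renders at ~1707x960   (1440p DLSS Q is sub-1080p internally)
```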
 
I see we are comparing GPUs here. So were the 3090 paired with 2700x or similar CPUs ?
Did this happen with PS4 and Xbox One? Was anyone testing cards back then with 1.6GHz Jaguars or even Bulldozer cores?

While I totally respect your position on native vs upscaled image quality, as a fellow 1070 user I have to ask: how would 4K DLSS Quality be any worse than what you must be using right now? We're pretty lucky if we can run modern titles with most of the bells and whistles (no RT, obviously) at native 1440p. 4K is pretty much off the table. And 4K DLSS Q would certainly be much better image quality than native 1440p, since it uses 1440p as its base resolution to upscale from.
This isn't pointed at me, but every person I know who dislikes the current state of DLSS-like reconstruction cites the new artifacts introduced by the process, not the native resolution. It's the trails, blurring, and other artifacts that people who are sensitive to them dislike.
 
This isn't pointed at me, but every person I know who dislikes the current state of DLSS-like reconstruction cites the new artifacts introduced by the process, not the native resolution. It's the trails, blurring, and other artifacts that people who are sensitive to them dislike.

Blurring isn’t unique to DLSS. TXAA/TAA also blur the image but people put up with it because it’s the only way to get rid of much more annoying temporal aliasing and shader artifacts. I think most people would make that same trade today.

DLSS ghosting is a much bigger issue. The hand-coded reprojection and rejection logic in TAA seems to do a better job of avoiding it. DLSS apparently can't tell exactly where a pixel has moved in some cases. Its handling of per-object motion vectors needs some work.

Having said that, these issues aren't present in every game, so it really should be a game-by-game decision to use it or not.
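For anyone wondering what that "hand-coded reprojection and rejection logic" in TAA actually looks like, here is a minimal per-pixel sketch of the classic approach: reproject the history buffer through the motion vector, then clamp it to the current frame's local neighbourhood so stale samples (ghosts) get pulled back towards what is actually on screen. The function name, the motion-vector convention, and the blend factor are all illustrative assumptions; real engines add variance clipping, YCoCg colour space, disocclusion masks and so on.

```python
import numpy as np

# Minimal sketch of classic TAA: history reprojection + neighbourhood clamping.
# frame/history are HxWx3 float arrays; motion is HxWx2 pixel offsets
# (assumed convention: current position minus motion = previous position).

def taa_resolve(frame, history, motion, blend=0.1):
    h, w, _ = frame.shape
    out = np.empty_like(frame)
    for y in range(h):
        for x in range(w):
            # Reproject: fetch where this pixel was in the previous frame.
            py = min(max(int(round(y - motion[y, x, 1])), 0), h - 1)
            px = min(max(int(round(x - motion[y, x, 0])), 0), w - 1)
            hist = history[py, px]

            # Rejection: clamp the history colour to the 3x3 neighbourhood of
            # the current frame, so colours that no longer exist there
            # (i.e. ghosting) are pulled back towards the current image.
            nb = frame[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].reshape(-1, 3)
            hist = np.clip(hist, nb.min(axis=0), nb.max(axis=0))

            # Accumulate: blend a little of the new frame into the clamped history.
            out[y, x] = hist * (1.0 - blend) + frame[y, x] * blend
    return out
```

The clamping is exactly the part that trades ghosting for flicker: clamp aggressively and ghosts disappear but high-contrast edges shimmer, clamp loosely and you get the trails people complain about. DLSS replaces this hand-tuned step with a learned heuristic, which is presumably why its failure cases look different.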
 
This is not at all my experience after seeing DLSS in so many titles. DLSS tends to be better at reprojecting the last frame than nearly all TAA.
One of the problems I have with DLSS so far is that you can't easily choose the base resolution. Most of the time you just set e.g. 1440p and activate DLSS (+ a preference for quality). The problem, at least for me, is that image quality takes a big hit: the image gets noticeably blurrier in motion. Control, for example, is a game I really can't play with DLSS. The root cause is that with TAA you have 1440p as the base resolution and TAA is applied on top of it, whereas with DLSS the base resolution goes down to sub-1080p and gets "upscaled" to 1440p. This is a big communication problem with DLSS: it is often presented as a replacement for TAA. It is, in part, but at the same time it is used to boost performance. It would be much better if developers implemented more control over DLSS and offered a super-resolution mode out of the box: if your output resolution is 1440p, directly offer a mode with 1440p as the base resolution, and then either enhance the image or enhance the performance. So much for the blurriness; if the base resolution of the image is >1080p, the blurriness in motion is much better.

The artifacts are another problem, but all TAA methods introduce them. DLSS has gotten better over time, but this is still a major problem for me.

The next problem is that it can look inconsistent. I have seen this quite often so far. It is also a problem with "sharpening" filters: some textures get much sharper, while others are... well, not that sharp. E.g. I often don't even increase AF because it can give games an oversharpened and inconsistent look.
 
DLSS ghosting is a much bigger issue. The hand-coded reprojection and rejection logic in TAA seems to do a better job of avoiding it. DLSS apparently can't tell exactly where a pixel has moved in some cases. Its handling of per-object motion vectors needs some work.

Ghosting happens at high-contrast edges. TAA there would either be effectively non-existent (flickering) or would reduce the in-picture contrast massively. With DLSS the contrast is still much better, and flickering is handled better, too.
 