IHV Business strategies and consumer choice

That's a completely inappropriate response. If the data's no good, explain why and present data that is good. Everyone is going to present data that supports their view - no-one's going around looking for data that disproves themselves. If you're lucky, you'll get someone who's more interested in the truth than an outcome and they'll present data just to add to the knowledge pool. In a public forum, these data come together from opposing viewpoints to present an overview that points to the truth. If someone's position is right, the more data we get, the more they are proven right, so no-one should be against More Data unless they fear they are wrong and they have trouble dealing with not being right 100% of the time.
It's a proportionate response, and the rationale for it is precisely that he's going to engage in confirmation bias as a specific adversarial pattern. The discussion was already headed beyond reasonable discourse ...
In the subsequent discussion, I see DavidGraham presenting data from numerous vendors. You've presented only one piece, which has been pointed out as skewed by clockspeed. The request stands for a standardised base-hardware comparison, and you need to present info on this. Fixating on an in-game benchmark also isn't ideal, as people don't play the benchmark. Broad sampling of the entire game across multiple users and different playthroughs will give better insight into how the game plays across hardware.
@Bold David already presented that data for everyone. In the worst-case scenario, both referenced cards are dead even in the in-game benchmark according to gameGPU or DSO.

If your impression is that I'm dismissing custom test scenes then you're off course in this case. The reason many others prefer the in-game benchmark is its highly controlled conditions, the level playing field, and the ease of comparison between the different data points. You can't exactly compare results between different observers if they each use different scenes, even amongst themselves ...
These kinds of discussions are so dumb. They are very easy to conclude. Over time, lots of vendors do lots of tests. We just need to collate all the data in one place and examine it and we'll see the truth. Someone could pony up the money and have these guys produce a meta-analysis, but we should be able to take a dozen benchmarks and get some rough averages ourselves.
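To make the "rough averages" idea concrete, here's a minimal sketch of how the collation could work: express each outlet's result as a ratio between the two cards and take the geometric mean across outlets. The outlet names and fps numbers below are made-up placeholders, not real review data.

from math import prod

# Each outlet's result as (card A fps, card B fps) from the same scene/settings.
# These numbers are made-up placeholders, NOT real review data.
reviews = {
    "outlet_1": (82.0, 75.0),
    "outlet_2": (110.0, 104.0),
    "outlet_3": (96.0, 99.0),
}

# Geometric mean of the per-outlet ratios is the usual way to average relative performance.
ratios = [a / b for a, b in reviews.values()]
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"Card A vs Card B, geometric mean over {len(ratios)} reviews: {geo_mean:.3f}x")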
It really isn't easy to conclude, especially if an opponent like David is going to insist on including dubious data from a tester where both the 7900 XT and XTX are literally equal in performance at 1080p or 1440p, which is virtually out of line with everyone else's results. If anyone is going to resort to using plausible deniability as a basis to continue their line of reasoning then they shouldn't get an easy pass for it, since it becomes a tedious exercise for those who honourably exchanged their thoughts with them ...
 
Stop making it about your perception of his strategy. Whatever evidence he brings to the table, you can bring your own. Your argument (anyone's argument) shouldn't be to try to discredit the opponent but to prove yourself right. Just post more data! Overload the argument in your favour with counter-evidence instead of grumbling that someone is presenting alternative facts to you. This is the only way to conduct a legitimate conversation over a disagreement in a somewhat objective discussion. ;)

Importantly, don't make it about convincing DavidGraham he's wrong. That's not going to happen. The defence in a court case isn't going to turn around and say, "you know what, you're right." Well, unless you present an overwhelming case and they choose to make a deal. What you're really trying to do is convince everyone else here whether the 7900 is performing 20% better than the 4080 in AC Mirage.
 
"More data" isn't always necessarily better if the quality of the samples in question are of low value hence why we see sample rejection being widely practiced during aggregate analysis and I'm not personally interested in participating in an argumentum ad populum fallacy ...

Back on track to the main topic, the preliminary results for the Modern Warfare 3 reboot look very promising, as did the Modern Warfare 2 reboot before it ...
 
Is Nvidia underperforming at lower resolutions or is AMD underperforming at higher resolutions?

Would guess the former because higher resolutions “should” be more efficient on all GPUs. More pixels = higher cache hit ratio and better latency hiding.
 
Is Nvidia underperforming at lower resolutions or is AMD underperforming at higher resolutions?
No one is "underperforming" anywhere. RDNA2 GPUs tend to show more performance loss than Ampere GPUs when going to higher resolutions because they have narrower buses and slower DRAM, and their cache doesn't help as much as it does at lower resolutions. This is pretty much remedied with RDNA3 vs Lovelace, which tend to have similar buses now and both use big caches to cover the bandwidth deficit.
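To put rough numbers on the "narrower buses and slower DRAM" point, here's a quick back-of-the-envelope calculation using the commonly quoted specs for a 6800 XT vs a 3080 (raw DRAM bandwidth only, ignoring the cache on both sides):

# Raw DRAM bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# Commonly quoted specs; Infinity Cache is not included in these figures.
rx_6800_xt = bandwidth_gb_s(256, 16.0)   # -> 512.0 GB/s
rtx_3080   = bandwidth_gb_s(320, 19.0)   # -> 760.0 GB/s
print(rx_6800_xt, rtx_3080)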
 
No one is "underperforming" anywhere. RDNA2 GPUs tend to show more performance loss than Ampere GPUs when going to higher resolutions because they have narrower buses and slower DRAM, and their cache doesn't help as much as it does at lower resolutions. This is pretty much remedied with RDNA3 vs Lovelace, which tend to have similar buses now and both use big caches to cover the bandwidth deficit.
It could also be looked at as recent Nvidia architectures gaining less from lowering resolution. Infinity cache is less useful at higher resolutions as AMD has stated, but do these huge compute arrays on big Nvidia GPUs have utilization issues at lower resolutions?
 
It could also be looked at as recent Nvidia architectures gaining less from lowering resolution. Infinity cache is less useful at higher resolutions as AMD has stated, but do these huge compute arrays on big Nvidia GPUs have utilization issues at lower resolutions?
Lower-resolution relative performance issues are mostly linked to Ampere, and it's hard to say what the reason for them is, but my guess would be CPU limitations, which Nv h/w used to hit more often than AMD's. Lovelace doesn't show the same behavior, to the same degree at least.
 
This is pretty much remedied with RDNA3 vs Lovelace
Well, according to ComputerBase, RDNA3 loses its edge fast in AC Mirage: the 7900XTX starts out 15% faster than the 4080 at 1080p, then collapses to a 7% lead at 4K. The same thing happens with RDNA2: the 6800XT's 18% lead over the 3080 at 1080p collapses to a 5% lead at 4K. This is using the latest drivers from both vendors.

I am guessing it's something to do with the engine of the game itself.

 
Further testing for Lords of the Fallen: at 4K, the 4080 leads the 7900XTX by 13% without RT and by 9% with RT.


PCGH shows an 11% lead for the 4080 over the 7900XTX at 4K.

 
I'm interested why nVidia performs so much better here than in other UE5 games. The 4090 is more than 2x faster than the 6900XT...
 
I'm interested why nVidia performs so much better here than in other UE5 games. The 4090 is more than 2x faster than the 6900XT...
I am interested in knowing why you would say that; other UE5 games don't behave like Remnant 2 or Immortals of Aveum.

For example, in The Talos Principle 2 full game, the 4090 is 2.2x faster than the 6900XT.

In Fort Solis, a 4090 has 2x the performance of a 6900XT.

In Fortnite UE5, the 4090 is 2.5x faster than the 6950XT.

Desordre and The Lords of the Fallen too, as you know from the posted benchmarks. Even Immortals of Aveum will probably change its behavior once HW Lumen is supported in the game.
 
Even Immortals of Aveum will probably change its behavior once HW Lumen is supported in the game.
Doubtful that this has anything to do with h/w RT being supported or not. A game can still be better optimized for some specific h/w even if it's using a third party engine.
 
It seems like the difference between Remnant 2/Immortals and Lords of the Fallen, and why the latter has a somewhat mediocre showing on AMD, might come down to it not using virtual shadow mapping. A more solid winning formula for them would be to push UE5 developers into using virtual shadow maps ...

I guess that would explain The Talos Principle 2's better results, especially at lower resolutions ...
 