AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

While true, usually, at least with previous generations, you had to upgrade whenever a new generation started. Last time this wasn't really the case, but now I think the 3700X should be the minimum going forward.
It's not all that strange; going from the PSX to the PS2 generation, to the PS3, etc., you had to upgrade everything.
Upgrade CPU every generation? What? Where? Ryzen 3000 series was the first time people really started to see the need for CPU upgrades since forever. The i7-2600K from 2011 was still popular even among forum-level enthusiasts when Zen 2 hit the market. Earlier Ryzen generations, while they started the movement, didn't perform well enough to make a big enough impact.

How practical is a modern high-end GPU in combination with a low-end CPU anyway? Yes, AMD GPUs don't stress CPUs as much, but how many people actually had problems with this before we started talking about driver overhead?
Extremely relevant, see above. Many people have been upgrading only their GPUs in the past few years.
 
How practical is a modern high-end GPU in combination with a low-end CPU anyway? Yes, AMD GPUs don't stress CPUs as much, but how many people actually had problems with this before we started talking about driver overhead?
A low-end CPU could be what was considered a high-end CPU when it was initially purchased, and that doesn't mean it should just be ignored. There's always the case of those who didn't know they were having issues and could have possibly had better performance out of a different GPU choice.

Personally, I don't think this is a driver-solvable problem for Nvidia; it's maybe a design choice around scheduling going back generations.
 
Can you give an example of a DX12-exclusive game from 2018 which would be CPU-limited on any CPU?
Why would it need to be DX12 exclusive?
Shadow of the Tomb Raider seems like a good candidate. DX12 is faster than DX11 in it on both AMD and NVIDIA with high-end CPUs, and, as shown by HUB, it clearly has AMD ahead of NVIDIA on low-end CPUs even with GTX 1070 Ti - 1080 level cards.
A slow-ish CPU + GTX 1070 or 1080 would be a realistic combination for many.
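For a rough sense of why this matters on that kind of combination: the CPU-side cost per draw call sets a floor on frame time, and a small per-call difference can be the difference between holding 60 fps or not on a slow CPU. The sketch below is only a back-of-the-envelope toy; the draw-call count and per-call costs are invented for illustration, not measurements of any real driver.

```cpp
#include <cstdio>

// Toy model: if the CPU side (game + driver) spends a fixed cost per draw
// call, the CPU-bound frame-time floor is roughly draw_calls * cost_per_call.
// All numbers below are invented for illustration, not measured.
int main() {
    const int draw_calls = 4000;      // hypothetical draws per frame
    const double cost_a_us = 3.0;     // hypothetical driver A, per call
    const double cost_b_us = 4.5;     // hypothetical driver B, per call

    const double frame_a_ms = draw_calls * cost_a_us / 1000.0;
    const double frame_b_ms = draw_calls * cost_b_us / 1000.0;

    std::printf("Driver A: %.1f ms CPU floor -> max ~%.0f fps\n",
                frame_a_ms, 1000.0 / frame_a_ms);
    std::printf("Driver B: %.1f ms CPU floor -> max ~%.0f fps\n",
                frame_b_ms, 1000.0 / frame_b_ms);
    // On a fast CPU both floors sit far below the GPU frame time, so the
    // difference never shows; a slower CPU scales the per-call costs up,
    // and the higher floor becomes the cap.
}
```

With these made-up numbers driver A tops out around 83 fps and driver B around 56 fps, which is exactly the kind of gap that only shows up once the GPU stops being the limit.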
 
Upgrade CPU every generation? What? Where? Ryzen 3000 series was the first time people really started to see the need for CPU upgrades since forever. The i7-2600K from 2011 was still popular even among forum-level enthusiasts when Zen 2 hit the market. Earlier Ryzen generations, while they started the movement, didn't perform well enough to make a big enough impact.

Indeed, the need for hardware upgrades hasn't really been there since the early 2010s. Usually, you needed to upgrade your hardware whenever a new console generation began.

Anyway, if you're stuck on low-end hardware and don't want RT or reconstruction tech, stick with AMD.
 
Why would it need to be DX12 exclusive?
Because if it's not, then an NV GPU user can just use the other API without that issue?
SOTTR runs fine enough on NV GPUs in DX12; it's not like it can't hold 60 fps.

What a ridiculous assumption. The data points completely in the other direction. It's precisely the DX11 performance that helps suggest why DX12 is suffering.
Can you elaborate on how the data points of one API point you to anything in another?
 
Can you give an example of a DX12-exclusive game from 2018 which would be CPU-limited on any CPU?
It does not have to be DX12; BF V chokes the NV driver just fine with DX11 (which is actually the preferred mode among players, as DX12 stutters a lot on any CPU/GPU combo).
 
Upgrade CPU every generation? What? Where? Ryzen 3000 series was the first time people really started to see the need for CPU upgrades since forever
I'd been using a 2500K since 2011 until the Zen+ era, when I upgraded to X470 mostly out of curiosity. I'd say most people (those who actually do it, I mean, not the average Joe who buys a black box at the closest Walmart/MediaMarkt, etc.) upgrade their GPUs first and then go for a full platform upgrade (and it's what the reviewers/tech-tubers always advise).
 
It does not have to be DX12; BF V chokes the NV driver just fine with DX11 (which is actually the preferred mode among players, as DX12 stutters a lot on any CPU/GPU combo).
This goes against everything reported by HUB though, doesn't it?

This issue isn't as simple as some of you people want to paint it.
 
Because if it's not, then an NV GPU user can just use the other API without that issue?
SOTTR runs fine enough on NV GPUs in DX12; it's not like it can't hold 60 fps.


Can you elaborate on how the data points of one API point you to anything in another?

Because they require different hardware considerations, and it is becoming well apparent you cannot have a jack of all trades across all of them. The evidence is not pointing anywhere else: build for one, suffer in another.
 
This goes against everything reported by HUB though, doesn't it?

This issue isn't as simple as some of you people want to paint it.
How so? You probably haven't watched the video to the end; it's specifically mentioned there. And who are "you people", might I ask? And what exactly is the issue, for that matter...

I play BFV almost daily with DX12, 0 stutters. It used to stutter but not anymore.
Good, when I played it was absolutely unplayable.
 
You will have to explain more, since I don't understand how you arrive at the conclusion that results in DX11 explain the issue in DX12.
You design your architecture and software stack according to what API you think the market will favour in the present/near future. You don't become DX12 compliant by simply printing the words "DX12 support" on the box.

Nvidia bet on DX11 with their hardware design. When you design to be very fast for one specific API, you make concessions in others. It's the same reason an ASIC monumentally kicks every general-purpose chip's ass at the one specific feature it supports.

And to clarify: today we are not surprised that they have DX12 overhead issues with Pascal (it emulated features like async compute); we are surprised because the same overhead remains in Turing and Ampere. It means their stack still suffers from legacy decisions.
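To make the API-bet argument a bit more concrete: one big difference between the two models is where the per-draw CPU work lands. Under DX11 every draw funnels through one implicit context, and Nvidia's driver famously moves part of that work onto its own worker threads; under DX12 the application records command lists on as many threads as it likes and the driver does far less per call. The snippet below is only a toy model in plain C++, assuming a hypothetical workload of 8000 draws and making no real D3D calls, just to show the submission pattern each API encourages.

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Toy model of the submission difference, not real D3D code.
// "DX11 style": every draw goes through one implicit context, so the
// per-call CPU work is concentrated on a single thread.
// "DX12 style": the app records command lists on several threads itself,
// then hands them to the queue in one cheap submit.

struct CommandList { std::vector<int> commands; };

void record(CommandList& cl, int first, int count) {
    for (int i = 0; i < count; ++i)
        cl.commands.push_back(first + i);   // stand-in for encoding one draw
}

int main() {
    const int draws = 8000;                 // hypothetical draws per frame

    // DX11-style: one thread records everything.
    CommandList immediate;
    record(immediate, 0, draws);

    // DX12-style: four worker threads each record their own command list.
    const int workers = 4;
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;
    for (int t = 0; t < workers; ++t)
        threads.emplace_back(record, std::ref(lists[t]),
                             t * draws / workers, draws / workers);
    for (auto& th : threads) th.join();

    std::printf("single-threaded path recorded %zu draws on 1 thread\n",
                immediate.commands.size());
    std::printf("multi-threaded path recorded %d lists of %zu draws each\n",
                workers, lists[0].commands.size());
}
```

The point is not that the toy work parallelizes (it obviously does); it's that DX12 makes the multi-threaded pattern the application's job, which is exactly why driver-side overhead that survives into that path stands out so much.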
 
And don't go all sad over that decision. They won their bet. They won the market. But the fight never stops, and AMD has been designing for DX12 since 2012. Eventually the market would turn to DX12, and all it cost AMD to bet on DX12 so early was everything.
 
You design your architecture and software stack according to what API you think the market will favour in the present/near future. You don't become DX12 compliant by simply printing the words "DX12 support" on the box.

Nvidia bet on DX11 with their hardware design. When you design to be very fast for one specific API, you make concessions in others. It's the same reason an ASIC monumentally kicks every general-purpose chip's ass at the one specific feature it supports.

And to clarify: today we are not surprised that they have DX12 overhead issues with Pascal (it emulated features like async compute); we are surprised because the same overhead remains in Turing and Ampere. It means their stack still suffers from legacy decisions.

The deliberations that go into an API update would have gone on between the primary stakeholders and Microsoft in the case of DX12. What AMD chose to do with Mantle in terms of interacting with the graphics pipeline and what it chose to expose represents one direction to take. I don't know if Nvidia was betting on DX11 or if it had a different idea of where the API had to go. I remember there was some speculation about an emphasis on expanding upon indirect dispatch or some other GPU-side expansion of capability, based on OpenGL extensions at the time.
How much Nvidia knew of AMD's implementation prior to Mantle is unclear, so saying Nvidia bet on DX11 because it didn't run an API AMD hadn't developed until years in the future seems to ignore the turnaround times involved with hardware architecture. It's difficult to be timely to market with something in silicon unless you were well-versed in it years in advance.

AMD may have used Mantle to force a transition and perhaps the direction of it, and Nvidia's ray-tracing and tensor work may be an example of it being done in the other direction.
 
My 1080 Ti/6700K, which were both close to the best available for gaming at the time, have certainly suffered the effects of this overhead for some time now in a variety of titles. I just never knew it was specific to Nvidia, since I haven't used an AMD GPU during this time.

In BFV it's flat-out impossible to have an enjoyable MP experience at any settings/resolution above 60 FPS. Capping it at 60 is the only way to prevent a rollercoaster of frame times and stutters; the API is irrelevant here. It's a very poor experience. The previous two Tomb Raiders also have sections where, despite the use of low-level APIs, I was surprised how low performance was when CPU-limited. Same for Hitman 2. Watch Dogs Legion is another one; 60 FPS is completely impossible. The list of games could go on, but I think this is enough to demonstrate that many people can face this issue, not just a hypothetical pairing of a 3090 with a 1st-gen Ryzen.
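For what it's worth, the 60 FPS cap that helps here is essentially just frame pacing: do the frame's work, then sleep out the rest of a fixed budget so frames arrive at an even cadence instead of a rollercoaster. Below is a minimal sketch assuming a 16.7 ms budget; it's a generic illustration, not how BFV or any driver actually implements its limiter.

```cpp
#include <chrono>
#include <thread>

// Minimal sketch of a frame cap: render, then sleep away whatever is left of
// the frame budget so frames are delivered at an even cadence.
// Generic illustration only; not the game's or driver's actual limiter.
int main() {
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::microseconds(16667);   // ~60 fps

    auto next_deadline = clock::now() + budget;
    for (int frame = 0; frame < 600; ++frame) {              // ~10 seconds
        // simulate_and_render();  // stand-in for the real frame work

        std::this_thread::sleep_until(next_deadline);        // absorb jitter
        next_deadline += budget;                              // fixed cadence
    }
}
```

Advancing a fixed deadline (rather than sleeping "now + budget") means a slow frame eats into the next frame's sleep instead of shifting the whole cadence, which is what keeps the frame times flat as long as the CPU floor stays under the budget.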
 