Practical cases.
How practical is a modern high-end GPU in combination with a low-end CPU anyway? Yes, AMD GPUs don't stress CPUs as much, but how many people actually had problems with this before we started talking about driver overhead?
> While true, usually at least with previous generations you had to upgrade whenever a new generation started. Last time this wasn't really the case, but now I think the 3700X should be the minimum going forward.

Upgrade the CPU every generation? What? Where? The Ryzen 3000 series was the first time people really started to see the need for CPU upgrades in forever. The i7-2600K from 2011 was still popular even among forum-level enthusiasts when Zen 2 hit the market. Earlier Ryzen generations, while they started the movement, didn't perform well enough to make a big enough impact.
It's not all that strange; going from the PSX to the PS2 generation, then to the PS3 and so on, you had to upgrade everything.
> How practical is a modern high-end GPU in combination with a low-end CPU anyway? Yes, AMD GPUs don't stress CPUs as much, but how many people actually had problems with this before we started talking about driver overhead?

Extremely relevant, see above. Many have been upgrading only their GPUs over the past years.
> There was, just not in reviews, because they use high-end CPUs. Actual gamers use everything from low-end to high-end.

Can you give an example of a DX12-exclusive game from 2018 that would be CPU-limited on any CPU?
> How practical is a modern high-end GPU in combination with a low-end CPU anyway? Yes, AMD GPUs don't stress CPUs as much, but how many people actually had problems with this before we started talking about driver overhead?

A low-end CPU could be what was considered a high-end CPU when it was initially purchased, and that doesn't mean it should just be ignored. There's always the case of those who didn't know they were having issues and could possibly have had better performance out of a different GPU choice.
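For what it's worth, the "how practical is it" question can be reasoned about with a very rough frame-time model: the frame rate is set by whichever is longer, the CPU side of the frame (game logic plus driver submission work) or the GPU side. A minimal sketch of that idea, with purely made-up numbers that only show the shape of the problem and don't represent any real CPU, GPU, or driver:

```python
# Toy CPU-vs-GPU bottleneck model. Every number below is a hypothetical
# illustration, not a measurement of any real hardware or driver.

def frame_stats(game_logic_ms, driver_overhead_ms, gpu_render_ms):
    """Frame time is set by the slower of the CPU side (game logic plus
    driver submission work) and the GPU side (rendering)."""
    cpu_ms = game_logic_ms + driver_overhead_ms
    frame_ms = max(cpu_ms, gpu_render_ms)
    limiter = "CPU" if cpu_ms >= gpu_render_ms else "GPU"
    return 1000.0 / frame_ms, limiter

scenarios = {
    # name: (game logic ms, driver overhead ms, GPU render ms)
    "fast CPU + midrange GPU":               (4.0, 2.0, 10.0),
    "slow CPU + high-end GPU, light driver": (9.0, 2.0, 5.0),
    "slow CPU + high-end GPU, heavy driver": (9.0, 5.0, 5.0),
}

for name, args in scenarios.items():
    fps, limiter = frame_stats(*args)
    print(f"{name}: {fps:.0f} fps ({limiter}-limited)")
```

The only point of the toy is that driver overhead matters exactly when the CPU side is already the longer half of the frame, which is also why benchmarks run on top-end CPUs wouldn't surface it.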
> Personally I don't think this is a driver-solvable problem for Nvidia; it's maybe a design choice going back generations for scheduling.

Doubtful, as it would show up in all APIs in that case.
> Can you give an example of a DX12-exclusive game from 2018 that would be CPU-limited on any CPU?

Why would it need to be DX12-exclusive?
> Why would it need to be DX12-exclusive?

Because if it's not, then an Nvidia GPU user can just use another API without that issue?
> What a ridiculous assumption. The data points completely in the other direction. It's precisely the DX11 performance that helps suggest why DX12 is suffering.

Can you elaborate on how the data points of one API point you to anything in another?
> Can you give an example of a DX12-exclusive game from 2018 that would be CPU-limited on any CPU?

It does not have to be DX12; BFV chokes the Nvidia driver just fine in DX11 (which is actually the mode preferred by players, as DX12 stutters a lot on any CPU/GPU combo).
> Upgrade the CPU every generation? What? Where? The Ryzen 3000 series was the first time people really started to see the need for CPU upgrades in forever.

I'd been using a 2500K since 2011 until the Zen+ era, when I upgraded to X470 mostly out of curiosity. I'd say most people (those who actually do it, I mean, not the average Joe who buys a black box at the closest Walmart/MediaMarkt, etc.) upgrade their GPUs first and then go for a full platform upgrade (and that's what the reviewers/tech-tubers always advise).
> It does not have to be DX12; BFV chokes the Nvidia driver just fine in DX11 (which is actually the mode preferred by players, as DX12 stutters a lot on any CPU/GPU combo).

This goes against everything reported by HUB though, doesn't it?
> It does not have to be DX12; BFV chokes the Nvidia driver just fine in DX11 (which is actually the mode preferred by players, as DX12 stutters a lot on any CPU/GPU combo).

I play BFV almost daily with DX12, zero stutters. It used to stutter, but not anymore.
SOTTR runs fine enough on Nvidia GPUs in DX12; it's not like it can't hold 60 fps.
> Because they require different hardware considerations, and it is becoming quite apparent you cannot have a jack of all trades across all of them. The evidence is not pointing anywhere else. Build for one, suffer in another.

You will have to explain more, since I don't understand how you arrive at the conclusion that results in DX11 explain the issue in DX12.
> This goes against everything reported by HUB though, doesn't it? This issue isn't as simple as some of you people want to paint it.

How so? You probably haven't watched the video to the end; it's specifically mentioned there. And who are "you people", might I ask? And what exactly the issue is, for that matter...
> I play BFV almost daily with DX12, zero stutters. It used to stutter, but not anymore.

Good; when I played, it was absolutely unplayable.
> You will have to explain more, since I don't understand how you arrive at the conclusion that results in DX11 explain the issue in DX12.

You design your architecture and software stack according to the API you think the market will favour in the present/near future. You don't become DX12-compliant by simply printing the words "DX12 support" on the box.
Nvidia bet on DX11 in their hardware design. When you design to be very fast at one specific API, you make concessions in others. It's the same reason an ASIC monumentally kicks every general-purpose chip's ass at one specific feature.
And to clarify: today we are not surprised that they have DX12 overhead issues with Pascal (it emulated features like async compute); we are surprised because the same overhead remains in Turing and Ampere. It means their stack still suffers from legacy decisions.
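To make that argument a bit more concrete, here is a minimal sketch of the reasoning. It is purely my own toy model: the draw-call count, per-call costs, and thread counts are all made-up assumptions, not a description of how Nvidia's (or anyone's) driver actually works.

```python
# Toy model of where per-draw-call CPU cost lands under a DX11-style
# driver versus a DX12-style driver. All per-call costs and thread
# counts are assumptions made up for illustration; nothing here is a
# measurement of any real driver.

DRAW_CALLS    = 20_000
GAME_LOGIC_MS = 6.0   # assumed CPU time per frame for the game itself
GPU_RENDER_MS = 6.0   # assumed GPU time per frame

def fps(cpu_submit_ms):
    """Frame rate when the frame is bounded by the slower of CPU and GPU."""
    cpu_ms = GAME_LOGIC_MS + cpu_submit_ms
    return 1000.0 / max(cpu_ms, GPU_RENDER_MS)

# DX11-style: the driver does a lot of per-call work (state tracking,
# validation, scheduling), but it can spread that work across its own
# worker threads behind the application's back.
dx11_per_call_us    = 1.5
dx11_worker_threads = 3
dx11_submit_ms = DRAW_CALLS * dx11_per_call_us / 1000.0 / dx11_worker_threads

# DX12-style: the application records command lists itself, possibly on
# several of its own threads, so the ideal per-call driver cost is small.
dx12_per_call_us   = 0.3
app_record_threads = 4
dx12_submit_ms = DRAW_CALLS * dx12_per_call_us / 1000.0 / app_record_threads

# The "legacy decisions" argument: if the driver still has to do extra
# per-call work at submission time (e.g. software scheduling), that work
# is serial and lands directly on the CPU frame instead of being hidden.
legacy_per_call_us = 0.6
dx12_legacy_submit_ms = dx12_submit_ms + DRAW_CALLS * legacy_per_call_us / 1000.0

print(f"DX11-style, driver threads hide the cost : {fps(dx11_submit_ms):.0f} fps")
print(f"DX12-style, lean driver                  : {fps(dx12_submit_ms):.0f} fps")
print(f"DX12-style, leftover per-call overhead   : {fps(dx12_legacy_submit_ms):.0f} fps")
```

With the made-up numbers above, the DX11 path stays fast because the driver spreads its cost over its own threads, while leftover per-call overhead in the DX12 path drags the same CPU below it; on a fast enough CPU all three cases end up GPU-limited and the difference disappears, which is the shape of the behaviour being argued about in this thread.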