> https://www.heise.de/ct/entdecken/?volltext=ryzen&sort=datum_ab&hauptrubrik=Test+&+Kaufberatung

And what is that exactly? Source please?
I think you need to buy it to read the review.
Fixed that for you; CF scaling gives a definitive boost on Ryzen compared to Intel (at least in Tomb Raider):
- Nvidia GPU + Intel CPU = Good
- AMD GPUS + Intel CPU = Good
- AMD GPUS + AMD CPU = Good
- Nvidia GPU + AMD CPU = Bad
> Here are the FuryX vs 1070 benches on Ryzen; all games were tested with DX12:

I fully expect a FuryX not to follow the same path: this one will not suffer scaling issues between AMD and Intel, and it will have lower DX12 performance than a 980Ti/1070 in this title. Thus it will reveal Ryzen's DX12 performance for what it truly is. This is the original, correct premise, without any unstable variables.
> Isn't that what Adored TV did when coming out with his rushed, unfounded theory? The 1070 was well OC'ed in his video, so it was massively underutilized, in contrast to the 480 CF, which had massive scaling under Ryzen. Not the single card, though; the 480 is close to the FuryX and the 1070 with Ryzen because they are CPU limited in the Geothermal Valley area (it's a CPU hog). The Titan XP was only 12% faster than the 1070 in this area, again because it's CPU limited.

Isn't he hitting a GPU bottleneck with the Fury X, though? 100% GPU util with the Fury X and not with the 1070.
> If you do so, you will find all sorts of anomalies in both AMD and NV GPUs, as both drivers become CPU limited. You will see Ryzen smashing hard against the CPU limitation wall. Intel CPUs will massively advance here, as they usually do in almost all game benchmarks. Remember, single thread is not Ryzen's strong point. We also know AMD GPUs become CPU limited faster than NV @DX11. The FuryX loses at 1080p and 1440p but puts up a good fight @4K.

One way to go about it is to keep every setting at Ultra but drop the resolution to the point where neither GPU is fully utilized. Then we can see how AMD and Nvidia behave with Ryzen.
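To make the "not fully utilized" part concrete, here is a rough sketch (entirely mine; the utilization samples are made up and the 0.95 threshold is arbitrary, none of it comes from the benches discussed here) of how you could classify a run as GPU-bound or CPU-bound from logged GPU utilization:

```python
# Rough heuristic, my own sketch: average GPU utilization near 100% suggests
# the GPU is the limit; clearly below that, the CPU (or the driver) is the cap.
def classify_bottleneck(gpu_util_samples, threshold=0.95):
    avg = sum(gpu_util_samples) / len(gpu_util_samples)
    return "GPU-bound" if avg >= threshold else "CPU/driver-bound"

# Hypothetical logs: one card pegged at ~100%, the other sitting around ~70%.
print(classify_bottleneck([0.99, 0.98, 1.00]))  # -> GPU-bound
print(classify_bottleneck([0.72, 0.68, 0.75]))  # -> CPU/driver-bound
```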
> Fixed that for you; CF scaling gives a definitive boost on Ryzen compared to Intel (at least in Tomb Raider)
Here are the FuryX vs 1070 benches on Ryzen; all games were tested with DX12:
Nothing out of the ordinary: the 1070 easily beats the FuryX, especially in Ashes of the Singularity, and decimates it in Tomb Raider (23% faster).
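For reference, a "23% faster" figure is just the relative fps difference. A trivial sketch, with placeholder numbers rather than the actual benchmark results:

```python
# Relative fps advantage; the fps values below are placeholders, not real data.
fury_fps = 100.0
gtx1070_fps = 123.0
print(f"1070 is {(gtx1070_fps / fury_fps - 1) * 100:.0f}% faster")  # -> 23% faster
```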
> Actually, NVIDIA was a Ryzen launch partner. AMD made a big deal of this late last year at Sonoma, to show that even though they also compete on GPUs, they weren't going to shoot themselves in the foot by excluding NVIDIA on Ryzen. This way you could have both the best video card (Titan XP) and the best CPU (Ryzen).

In the end, it's not NVIDIA's fault, as they haven't been programming for Ryzen for months. It's probably fair to say that reviewers who see gigantic differences are just lazily assuming that their test is valid.
> The PC version of Tomb Raider is an Nvidia sponsored game... so you can probably forget about that; they don't care about performance on competing GPUs at all (can't blame them... they are a business, not a charity).

Isn't DX12 all about giving full control to developers?
Maybe AMD should talk to Crystal Dynamics? They worked with Oxide and boosted AotS performance on NVIDIA hardware (see footnote) -> http://www.amd.com/en-us/press-releases/Pages/stardock-and-oxide-2017mar30.aspx
So when the CPU limitation reaches its breaking point, Ryzen will work better with the DX12 implementation/driver of AMD GPUs than with NVIDIA's? I wonder about the implications of this for Ryzen as a choice for gamers.
> Were there any early access groups that did not encounter issues with Ryzen? TBH, coordination could have been better, judging by the initial "witch hunts" to identify performance issues. And once the performance issues cropped up, it seemed none of the early access groups had any clue... hardware, operating systems, and software were all open as possible causes.

Actually, NVIDIA was a Ryzen launch partner. AMD made a big deal of this late last year at Sonoma, to show that even though they also compete on GPUs, they weren't going to shoot themselves in the foot by excluding NVIDIA on Ryzen. This way you could have both the best video card (Titan XP) and the best CPU (Ryzen).
I don't know exactly how long they've had a Ryzen, but they were certainly one of the early access groups. However, I would be surprised if that value isn't "months," going back at least as far as December 2016.
The PC version of Tomb Raider is an Nvidia sponsored game... so you can probably forget about that; they don't care about performance on competing GPUs at all (can't blame them... they are a business, not a charity).
I'd wait a few months for Nvidia to sort their drivers out before saying that.
The Ryzen userbase is probably still too small for them to be relevant.
> So when the CPU limitation reaches its breaking point, Ryzen will work better with the DX12 implementation/driver of AMD GPUs than with NVIDIA's? I wonder about the implications of this for Ryzen as a choice for gamers.
FWIW, my colleague compared 20 games with and without hyperthreading, using a 1080 Ti and a Fury X at 720p (with AA/AF and AO turned off as much as possible):
In Rise of the Tomb Raider and Hitman (both in DX12 mode) the Fury X managed to beat the 1080 Ti.
In AotS and Sniper Elite the 1080 Ti was faster, as it was in all DX11 and DX9 titles (Overwatch, For Honor, BF1, Fallout 4, DEMD, Witcher 3, Goat Sim, FC Primal, Grid Autosport, GR Wildlands, GTA 5, WoW, WD2 and TW:WH). Curiously, in Outcast 1.1, which uses a software renderer, the Fury X was ever so slightly faster as well (24.1 vs. 23.7 fps with SMT and 22.8 vs. 22 without).
So yes, in some (rare? maybe evolving into a trend - who knows?) cases, the Fury X can be more forgiving on the CPU than the GTX 1080 - or probably rather their respective drivers.
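Out of curiosity, here is the SMT scaling implied by those Outcast numbers (my own quick calculation; the function name is just something I made up):

```python
# SMT scaling implied by the Outcast 1.1 figures quoted above
# (Fury X: 24.1 -> 22.8 fps, 1080 Ti: 23.7 -> 22 fps, SMT on -> off).
def smt_gain(fps_on: float, fps_off: float) -> float:
    return (fps_on / fps_off - 1) * 100

print(f"Fury X:  +{smt_gain(24.1, 22.8):.1f}% from SMT")  # ~ +5.7%
print(f"1080 Ti: +{smt_gain(23.7, 22.0):.1f}% from SMT")  # ~ +7.7%
```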
> It is still curious that it happens only in Tomb Raider and Hitman. Ashes, for example, uses a lot more CPU resources but doesn't show the same issue. And we know from HardwareUnboxed's testing that it only happens @720p in Tomb Raider; the 1070 @1080p didn't exhibit any problem.

Maybe. Maybe the Nvidia driver is just more heavily threaded, increasing contention for the CPU's resources, which in some cases is detrimental to performance? I don't know.
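If that threading guess holds, the effect is easy to caricature with a toy model (all numbers hypothetical; my own illustration, not measured driver behavior): splitting a fixed amount of per-frame driver work across more threads helps until you run out of cores, after which per-thread overhead claws the gains back:

```python
# Toy model, not measured driver behavior: fixed per-frame driver work split
# across N threads; past the core count, each extra thread adds a small
# contention penalty (sync, cache thrash) instead of more speedup.
CORES = 4             # hypothetical cores available to the driver
DRIVER_WORK_MS = 8.0  # hypothetical driver CPU work per frame, in ms
PENALTY_MS = 0.4      # hypothetical cost per thread beyond the core count

def driver_cpu_time(threads: int) -> float:
    parallel_part = DRIVER_WORK_MS / min(threads, CORES)
    contention = PENALTY_MS * max(0, threads - CORES)
    return parallel_part + contention

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} driver threads -> {driver_cpu_time(n):.1f} ms/frame")
# Time falls from 8.0 ms (1 thread) to 2.0 ms (4 threads), then climbs back
# toward 6.8 ms at 16 threads as contention dominates.
```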
Still, this is different from the whole CF thing. Testing @720p is a good way to expose the CPU limitation further. What we have here...
It is still curious that it happens only in Tomb Raider and Hitman. Ashes, for example, uses a lot more CPU resources but doesn't show the same issue. And we know from HardwareUnboxed's testing that it only happens @720p in Tomb Raider; the 1070 @1080p didn't exhibit any problem.