AMD Ryzen CPU Architecture for 2017

Really? Because no one plays on high refresh monitors, or wants to push AA or details more, or many other reasons we like to tweak things on PC to get the best that our hardware can do. Let's just sell all our GPUs and buy consoles. Sorry, what forum section are we in?

Why not test at those settings then?

I can fully understand the proposition if it enables better quality or more consistent performance, but when you already average over 100 fps, it sure seems more academic than actually useful.

Totally playable is relative. Civ 6 is IMO totally playable at 40fps, CSGO under 130 not so much.

Indeed, not questioning that.
 
Why not test at those settings then?

I can fully understand the proposition if it enables better quality or more consistent performance, but when you already average over 100 fps, it sure seems more academic than actually useful.
It's not academic if you just bought a 240hz monitor. 1M viewers watched the last CSGO major, many of them want a CPU that won't be a bottleneck. I'm not trying to take away from AMD's achievement but the "good enough" angle doesn't work for everyone.
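To put the 240 Hz point in concrete terms: sustaining a refresh rate means the CPU has to deliver a frame inside a fixed time budget. A quick sketch of the standard 1000/fps conversion (the fps values below are the ones mentioned in this thread, not benchmark data):

```python
# Frame-time budget: to sustain a given fps, each frame must be
# produced within 1000/fps milliseconds.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (40, 130, 240):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

At 240 Hz the whole frame has to be done in about 4.2 ms, versus 25 ms at 40 fps, which is why CPU bottlenecks that are invisible at 60 Hz matter to high-refresh players.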
 
He updated his chart with 3466 MHz low-latency memory. Still seeing healthy fps growth overall, and all of this achieved pretty much just by a new AGESA version which allows for better memory control.

 
As @Malo previously stated, 2 weeks or so is the average time for hardware vendors to update their BIOS.
I believe my reference was 2 weeks or so after their flagship BIOS was updated for the peasant motherboards.
 
He updated his chart with 3466 MHz low-latency memory. Still seeing healthy fps growth overall, and all of this achieved pretty much just by a new AGESA version which allows for better memory control.

From that same page, it seems like diminishing returns start after 3200 MHz.

[Chart: Witcher 3 memory scaling test v4]
 
From that same page, it seems like diminishing returns start after 3200 MHz.

Could be. Or we might simply be getting more and more GPU bound as the GPU gets fed better little by little. The speed penalty from Infinity Fabric's latency also keeps shrinking as its clock increases. Different games react a bit differently to latency and bandwidth changes too, though Witcher 3 does seem to scale well with latency if you compare 2400LL against 2933, for example.
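The Infinity Fabric point can be made concrete: on first-gen Ryzen the fabric clock is tied to the memory clock, which is half the DDR4 effective transfer rate, so faster RAM directly speeds up cross-CCX traffic. A rough sketch, assuming fabric-hop latency scales inversely with the fabric clock (a simplification; real latency has fixed components too):

```python
def fabric_clock_mhz(ddr_rate: int) -> float:
    # On Zen 1 the Infinity Fabric clock equals the memory clock,
    # i.e. half the DDR4 effective transfer rate.
    return ddr_rate / 2

def relative_hop_latency(ddr_rate: int, baseline: int = 2400) -> float:
    # Crude model: fabric-hop latency ~ 1 / fabric clock,
    # expressed relative to a DDR4-2400 baseline.
    return fabric_clock_mhz(baseline) / fabric_clock_mhz(ddr_rate)

for rate in (2400, 2933, 3200, 3466):
    print(f"DDR4-{rate}: fabric {fabric_clock_mhz(rate)} MHz, "
          f"hop latency {relative_hop_latency(rate):.2f}x of DDR4-2400")
```

Under this crude model, going from DDR4-2400 to DDR4-3466 cuts the fabric-hop portion of cross-CCX latency by roughly 30%, which lines up with why memory overclocking helps Ryzen more than it helped Intel's ring-bus parts.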
 
Yeah, Witcher 3 is probably GPU bound after 3200LL in that case. One game that keeps on scaling is Fallout 4, for whatever reason it likes both bandwidth and low latency. I've been running stable at 3466CL14 with 14-14-14-34-48 and tight sub-timings for a while now, AGESA 1.0.0.6 definitely helped with that.
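For games that like low latency, the absolute CAS latency in nanoseconds is what matters, and tightening CL while raising the clock compounds: one memory cycle is 2000/ddr_rate ns, so first-word latency is CL cycles times that. A quick sketch of the standard conversion (these are generic numbers, not measurements from the post):

```python
def cas_latency_ns(ddr_rate: int, cl: int) -> float:
    # Memory clock is ddr_rate/2 MHz, so one cycle lasts
    # 2000/ddr_rate ns; CAS latency is CL cycles * cycle time.
    return cl * 2000.0 / ddr_rate

for rate, cl in ((2400, 14), (3200, 14), (3466, 14)):
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
```

Holding CL14 while going from 2400 to 3466 MT/s drops first-word latency from about 11.7 ns to about 8.1 ns, on top of the bandwidth gain.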
 
Yeah, Witcher 3 is probably GPU bound after 3200LL in that case. One game that keeps on scaling is Fallout 4, for whatever reason it likes both bandwidth and low latency. I've been running stable at 3466CL14 with 14-14-14-34-48 and tight sub-timings for a while now, AGESA 1.0.0.6 definitely helped with that.
Bad programming = needing more of everything: low latency because of lots of indirection, high bandwidth for the same reason, and lots of branching...
 
Yeah, Witcher 3 is probably GPU bound after 3200LL in that case. One game that keeps on scaling is Fallout 4, for whatever reason it likes both bandwidth and low latency. I've been running stable at 3466CL14 with 14-14-14-34-48 and tight sub-timings for a while now, AGESA 1.0.0.6 definitely helped with that.

Nah, still not GPU-bound. From a random 3333LL "bench scenario":
[Screenshot: GPU usage during a 3333LL bench run]
 