Bear in mind the CPU is far more sensitive to latency than the GPU, so any contention is going to have a bigger impact. It's the effect on the CPU that I'm interested in.
I think by giving CPUs a fixed budget, no matter what solution is picked, it should be fairly easy to avoid contention ...
I remember there has been a discussion here about DDR3 vs GDDR5, and the general consensus and data reveal that there is no significant difference in latencies.
Either the code is more complex, or the dataset is smaller.
All of this applies equally to the contention between the 8 CPU cores. You have 9 entities trying to access the memory at the same time, but 8 of them have priority over the GPU. I think contention between cores should logically dwarf any impact from the GPU, and it still doesn't seem to be an issue. The memory arbitration is probably designed to deal with this.
Ignore any raw latency difference, then; it's the contention that induces additional latency.
I wasn't really saying differing latencies were a big factor, more of a contributing factor. To be honest I don't know what those KPIs mean, but I can see roughly 10% in most of them, which in a stall situation could add 10% to the bottom line... and when you have logical dependencies in your threads, that would have more of an impact.

I remember there has been a discussion here about DDR3 vs GDDR5, and the general consensus and data reveal that there is no significant difference in latencies. IIRC GDDR5 has relatively higher latency in clock cycles, but GDDR5 also runs at much higher clocks, in the 5.5~7 GHz range (effective), while current DDR3 runs in the 1.6~2.3 GHz range. If you factor the higher cycle-count latencies against the higher clocks, they cancel each other out. If you measure the time itself, I think they are generally the same.
Here's the info.
http://www.hynix.com/datasheet/pdf/dram/H5TQ1G4(8_6)3AFP(Rev0.1).pdf
http://www.hynix.com/datasheet/pdf/graphics/H5GQ1H24AFR(Rev1.0).pdf
Example GDDR5 timings as provided by the Hynix datasheet:
CAS = 10.6 ns
tRCD = 12 ns
tRP = 12 ns
tRAS = 28 ns
tRC = 40 ns
DDR3 timings for Corsair DDR3-2133 @ 11-11-11-28:
CAS = 10.3 ns
tRCD = 10.3 ns
tRP = 10.3 ns
tRAS = 26.2 ns
tRC = 36.5 ns
So latency-wise you probably shouldn't see a disadvantage with GDDR5; they're in the same ballpark.
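To illustrate the "higher cycle counts at higher clocks cancel out" point, here's a minimal back-of-the-envelope sketch. The cycle counts and clock figures are illustrative assumptions (DDR3-2133 CL11 on a ~1066 MHz bus clock, and GDDR5 at 6 Gbps with an assumed CL of 16 on a 1.5 GHz command clock), not values pulled from the datasheets above:

    def cas_latency_ns(cas_cycles, clock_mhz):
        # Absolute latency = number of cycles divided by the clock frequency
        return cas_cycles / clock_mhz * 1000.0

    # DDR3-2133 CL11: the bus clock is half the 2133 MT/s data rate
    print(cas_latency_ns(11, 1066.5))   # ~10.3 ns
    # GDDR5 at 6 Gbps: the command clock is the data rate / 4; assume CL16
    print(cas_latency_ns(16, 1500.0))   # ~10.7 ns

Both come out around 10 ns, which is why the absolute timings above look so similar despite the very different clock speeds.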
Sorry, I'm on a phone and quoting/bolding is difficult, but the guy who replied to you saying the CPU could be the loser in all this, that's the point I am making. Multi-core coding is difficult at best, and there are going to be logical dependencies between the threads. ANY increased latency of CPU requests could impact more than just the thread concerned. Multiply that by the number of threads and the potential for stalls (logical or physical) has to go up.
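As a toy illustration of why added memory latency hurts more when threads depend on each other, here's a hedged sketch (all numbers are made up): a frame built from dependent stages running back-to-back, where any stall lands on the critical path instead of being hidden by other work.

    import random

    def frame_critical_path_us(stages, base_us, stall_us, stall_prob, trials=10_000):
        # Dependent stages can't overlap, so every stall adds directly to the frame time
        total = 0.0
        for _ in range(trials):
            frame = 0.0
            for _ in range(stages):
                frame += base_us + (stall_us if random.random() < stall_prob else 0.0)
            total += frame
        return total / trials

    # 8 dependent stages, each with a 10% chance of an extra 0.15 us stall
    print(frame_critical_path_us(8, 1.0, 0.15, 0.10))   # ~ 8 * (1.0 + 0.10 * 0.15)

With independent work those stalls could be overlapped; with a dependency chain they simply accumulate.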
It doesn't. But the more simultaneous requests you make, the more contention you have. If you'd found a way to get 3/4 of your requests going to a separate pool, you'd have less of a problem in the first place...

Why does contention induce additional latency just for PS4?
It might do, but it's complex to do, hey? And if you were aiming to squeeze every last drop of performance, you'd be doing that anyway, whatever the platform.

I have said something different: I have said to group threads that access similar data in the same cluster, in order to maximize the potential good effects of the shared L2.
The dev should, on the other hand, optimize the code at the optimization stage to keep a coherent access pattern (easier cache reuse among threads in the same cluster?).
It might actually reduce average accesses to RAM from the CPUs, due to better average local data coherency.
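As a rough sketch of what "group threads that access similar data in the same cluster" could look like in practice (Linux-only, and the core numbering and cluster split here are assumptions, not the PS4's actual topology), threads working on the same dataset get pinned to the cores that share one L2:

    import os
    import threading

    CLUSTER_0 = {0, 1, 2, 3}   # assumed: four cores sharing one L2
    CLUSTER_1 = {4, 5, 6, 7}   # assumed: the other four-core cluster

    def worker(shared_data, cluster_cores):
        # On Linux the affinity syscall is thread-scoped, so pid 0 pins this thread
        os.sched_setaffinity(0, cluster_cores)
        return sum(shared_data)   # stand-in for work that reuses the same cache lines

    data = list(range(100_000))
    threads = [threading.Thread(target=worker, args=(data, CLUSTER_0)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The idea is just that the four threads touching the same data end up filling one L2 with it, instead of duplicating it across both clusters.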
I don't think the split achieves that; I think it stops the GPU monopolising all the bandwidth. I think there are two measures getting confused. One is the bandwidth, the other is the latency of any given request. Each cycle the memory bus can only do one thing: service one client. Switching between them has a latency cost. I'm not sure there is a limit on that cost (e.g. 150 vs 20). I could be wrong...
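Here's a toy arbitration model of that idea, purely illustrative (the request rates and the switch penalty are made-up numbers, not anything measured on real hardware): one request is serviced per cycle, and handing the bus from one client to the other costs extra cycles, so adding GPU traffic raises the average wait seen by CPU requests.

    import random
    from collections import deque

    def avg_cpu_wait(cpu_rate, gpu_rate, switch_penalty, cycles=200_000):
        # One shared bus: one request serviced at a time, extra cycles when the owner changes
        queue, waits = deque(), []
        last_owner, busy_until = None, 0
        for t in range(cycles):
            if random.random() < cpu_rate:
                queue.append(("cpu", t))
            if random.random() < gpu_rate:
                queue.append(("gpu", t))
            if queue and t >= busy_until:
                owner, issued = queue.popleft()
                penalty = switch_penalty if owner != last_owner else 0
                busy_until = t + 1 + penalty
                last_owner = owner
                if owner == "cpu":
                    waits.append(t - issued)
        return sum(waits) / max(len(waits), 1)

    print(avg_cpu_wait(0.1, 0.0, 2))   # CPU alone: short waits
    print(avg_cpu_wait(0.1, 0.3, 2))   # add GPU traffic: CPU waits grow

It's obviously nothing like a real memory controller, but it shows the mechanism: the extra latency the CPU sees comes from queueing behind the other client plus the cost of switching owners, not from the DRAM cells themselves.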
There is that too, but I'm just looking at Skyrim with 4K texture mods, which can run at 30 fps on a 7850, and I ask why so many PS4 games have unimpressive textures. We all know very high texture resolutions on PC only have a moderate impact on GPUs.
What is this thread? Internet warriors have discovered what game devs and Sony/AMD engineers have not? Maybe Sony found some holes they can access to reduce contention?