Is PS4 hampered by its memory system?

Discussion in 'Console Technology' started by oldschoolnerd, Sep 26, 2013.

Thread Status:
Not open for further replies.
  1. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,762
    Likes Received:
    2,639
    Location:
    Maastricht, The Netherlands
    I think by giving CPUs a fixed budget, no matter what solution is picked, it should be fairly easy to avoid contention ...
     
  2. Strange

    Veteran

    Joined:
    May 16, 2007
    Messages:
    1,698
    Likes Received:
    428
    Location:
    Somewhere out there

    I remember there was a discussion here about DDR3 vs GDDR5, and the general consensus, backed by data, was that there is no significant difference in latency. IIRC GDDR5 has a relatively higher latency in clock cycles, but it also runs at much higher clocks, in the 5.5~7 GHz range, while current DDR3 runs in the 1.6~2.3 GHz range. Factor the higher cycle-count latencies against the higher clocks and they cancel each other out. Measured in actual time, I think they are generally the same.

    Here's the info.

    http://www.hynix.com/datasheet/pdf/dram/H5TQ1G4(8_6)3AFP(Rev0.1).pdf
    http://www.hynix.com/datasheet/pdf/graphics/H5GQ1H24AFR(Rev1.0).pdf

    Examples here are
    GDDR5 timings as provided by the Hynix datasheet:
    CAS = 10.6ns
    tRCD = 12ns
    tRP = 12ns
    tRAS = 28ns
    tRC = 40ns


    DDR3 timings for Corsair 2133@11-11-11-28:
    CAS = 10.3ns
    tRCD = 10.3ns
    tRP = 10.3ns
    tRAS = 26.2ns
    tRC = 36.5ns

    So latency-wise you probably shouldn't see a disadvantage from GDDR5, as they're in the same ballpark.
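
    The cancellation argument above is just arithmetic: absolute latency is the cycle count divided by the command clock. A quick sketch of that conversion, using illustrative figures (DDR3-2133 at CL11 and a 6 Gbps GDDR5 part with a 1.5 GHz command clock and CL16 are assumptions, not datasheet values):

```python
# Absolute CAS latency in ns = CAS latency in cycles / command-clock frequency.
# The cycle counts and clocks below are illustrative assumptions, chosen to
# land near the datasheet figures quoted above.

def cas_ns(cas_cycles, clock_mhz):
    """Convert a CAS latency in clock cycles to nanoseconds."""
    return cas_cycles / clock_mhz * 1000.0

ddr3 = cas_ns(11, 1066.0)   # DDR3-2133: 11 cycles at a ~1066 MHz clock
gddr5 = cas_ns(16, 1500.0)  # 6 Gbps GDDR5: 16 cycles at a 1.5 GHz command clock

print(f"DDR3  CAS: {ddr3:.1f} ns")   # ~10.3 ns
print(f"GDDR5 CAS: {gddr5:.1f} ns")  # ~10.7 ns
```

    More cycles at a faster clock comes out to roughly the same wall-clock delay, which is why the two columns of timings end up so close.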
     
    #22 Strange, Sep 26, 2013
    Last edited by a moderator: Sep 26, 2013
  3. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    I don't think the split achieves that; I think it stops the GPU monopolising all the bandwidth. I think there are two measures getting confused: one is bandwidth, the other is the latency of any given request. Each cycle the memory bus can only do one thing: service one client. Switching between clients has a latency cost, and I'm not sure there is a limit on that cost (e.g. 150 vs 20). I could be wrong...
     
  4. MrFox

    MrFox Deludedly Fantastic
    Legend

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    All of this applies equally to the contention between the 8 CPU cores. You have 9 entities trying to access the memory at the same time, but 8 of them have priority over the GPU. I think contention between cores should logically dwarf any impact from the GPU, and it still doesn't seem to be an issue. The memory arbitration is probably designed to deal with this.
     
  5. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    Ignore any raw latency difference then, it's the contention that induces additional latency.
     
  6. pMax

    Regular

    Joined:
    May 14, 2013
    Messages:
    327
    Likes Received:
    22
    Location:
    out of the games
    I said something different: I said to group threads that access similar data in the same cluster, in order to maximize the potential benefit of the shared L2.
    The dev should, on the other hand, optimize the code at the optimization stage to keep a coherent access pattern (easier cache reuse among threads in the same cluster?).

    It might actually reduce average accesses to RAM from CPUs, due to average better local data coherency.
     
  7. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    So are all the GPU cores' memory requests funnelled through one aggregator? Makes sense to do that, but it's going to be very greedy... and there is still the problem I suggested in the OP: when the memory controller has to arbitrate a conflict, does it interrupt the in-flight GPU request, or does it make the CPU wait? Option one leads to increased contention; option two leads to increased latency for the CPU.
     
  8. Strange

    Veteran

    Joined:
    May 16, 2007
    Messages:
    1,698
    Likes Received:
    428
    Location:
    Somewhere out there
    Why does contention induce additional latency just for PS4? :???:
     
  9. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    I wasn't really saying differing latencies were a big factor, more of a contributing factor. To be honest I don't know what those timing parameters mean, but I can see roughly 10% in most of them, which in a stall situation could add 10% to the bottom line... and when you have logical dependencies between your threads, that would have more of an impact.

    Having said that, even if the latencies were identical my point still stands: the more simultaneous requests made to the memory bus, the more contention. The PS4, without a dual-pool configuration, has loads of requests hitting its single pool.
     
  10. Pixel

    Veteran

    Joined:
    Sep 16, 2013
    Messages:
    1,008
    Likes Received:
    477
    Perhaps there are some oddities with GDDR5 that were not expected by AMD or Sony. Why else is AMD backing off its GDDR5-based Kaveri PC APU?

    There is that too, but I'm just looking at Skyrim with 4K texture mods, which can run @ 30fps on a 7850, and asking why so many PS4 games have unimpressive textures. We all know very high texture resolutions on PC have only a moderate impact on GPUs.
     
    #30 Pixel, Sep 26, 2013
    Last edited by a moderator: Sep 26, 2013
  11. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    It doesn't. But the more simultaneous requests you make, the more contention you have. If you'd found a way to direct 3/4 of your requests to a separate pool, you'd have less of a problem in the first place...
     
  12. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    It might do, but it's complex to do, eh? And if you were aiming to squeeze out every last drop of performance, you'd be doing that anyway, whatever the platform.
     
  13. Pixel

    Veteran

    Joined:
    Sep 16, 2013
    Messages:
    1,008
    Likes Received:
    477
    Instead of looking at it on a bandwidth basis, look at it on a command-clock/cycles basis. How many memory cycles (ballpark range) would something that saturates a 7850's bandwidth (like Skyrim with 4K texture mods) take?
    Are the remaining command-clock memory cycles sufficient to feed the latency-sensitive CPU with its tiny cache? If not, they'd have to compromise on feeding the GPU on a cycles basis.
     
    #33 Pixel, Sep 26, 2013
    Last edited by a moderator: Sep 26, 2013
  14. gurgi

    Regular

    Joined:
    Jul 7, 2003
    Messages:
    605
    Likes Received:
    1
    PS4 has low res textures because of CPU latency created by UMA memory contention?
     
  15. MrFox

    MrFox Deludedly Fantastic
    Legend

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    We are entering conspiracy theory land. :roll:
    I would have said it's FUD land already, but no, it's a bit further up ahead...
     
  16. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,762
    Likes Received:
    2,639
    Location:
    Maastricht, The Netherlands
    It means you've given the CPU a budget to work with. This then means you can think about priority, latency and whatnot. Every 'frame' (clock-cycle equivalent) you can say: prioritise up to x cycles for the CPU, then service the GPU. It's probably a huge oversimplification, but it seems... unlikely, to say the least, that there wasn't a clear design decision made here.

    And incidentally, I thought the days where a memory bus does 'only one thing' were long behind us, but I am certainly no low level expert.
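
    The budget idea above can be sketched in a few lines. This is a toy model, not how any real memory controller works: the window size, the CPU budget, and the queue names are all made up for illustration. Within each window of bus cycles, pending CPU requests are serviced first up to a fixed budget, then the GPU gets the rest, and spare cycles fall back to the CPU:

```python
# Toy time-sliced arbiter: hypothetical sketch of "prioritise up to x cycles
# for the CPU, then service the GPU" within each window. All numbers and
# names are illustrative assumptions.

def arbitrate(cpu_queue, gpu_queue, window=100, cpu_budget=20):
    """Return the order of service for one window of bus cycles."""
    schedule = []
    cpu_used = 0
    for _ in range(window):
        if cpu_queue and cpu_used < cpu_budget:
            schedule.append(cpu_queue.pop(0))  # CPU first, up to its budget
            cpu_used += 1
        elif gpu_queue:
            schedule.append(gpu_queue.pop(0))  # then the GPU fills the window
        elif cpu_queue:
            schedule.append(cpu_queue.pop(0))  # GPU idle: spare cycles to CPU
    return schedule

order = arbitrate([f"cpu{i}" for i in range(30)],
                  [f"gpu{i}" for i in range(200)])
print(order[:3])  # ['cpu0', 'cpu1', 'cpu2']
```

    The point of such a scheme is that the CPU's worst-case wait is bounded by the window length rather than by however long the GPU's burst happens to be.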
     
  17. dobwal

    Legend

    Joined:
    Oct 26, 2005
    Messages:
    5,955
    Likes Received:
    2,326
    There are dozens upon dozens of research papers on this subject that present dozens upon dozens of solutions to tackle this issue. None I have read states that this is a difficult problem to overcome, just that giving simple preference to the CPU or GPU when accessing memory isn't a robust solution.

    That being said, it's going to be hard to determine the effect of memory contention on the PS4 if you don't account for the fact that the PS4 isn't a discrete CPU/GPU setup but probably an HSA design, which minimizes the data copying between different pools of memory normally employed by a discrete system, thereby reducing bandwidth pressure, memory contention and latency.
     
    #37 dobwal, Sep 26, 2013
    Last edited by a moderator: Sep 26, 2013
  19. DrJay24

    Veteran

    Joined:
    May 16, 2008
    Messages:
    3,894
    Likes Received:
    634
    Location:
    Internet
    :roll:

    What is this thread? Internet warriors have discovered what game devs and Sony/AMD engineers have not? Maybe Sony found some holes they can access to reduce contention?
     
  20. Pixel

    Veteran

    Joined:
    Sep 16, 2013
    Messages:
    1,008
    Likes Received:
    477
    Why are you saying this? I don't get it. We are the last on earth to discover it, long after the Sony/AMD engineers gained complete knowledge and understanding of it, and long after the game studios discovered it. It's just a result of all the gameplay videos we've seen from all the games.
     