The ESRAM in Durango as a possible performance aid

Discussion in 'Console Technology' started by Rangers, May 4, 2013.

  1. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    AMD is producing caches at 32MB and above now?
     
  2. Michellstar

    Regular

    Joined:
    Mar 5, 2013
    Messages:
    662
    Likes Received:
    380
    "not that big though."

    Anyway, is it 8 MB but on 32nm SOI?
    The largest parts at TSMC are 28nm bulk, aren't they?

    Or, more precisely, how much SRAM does a high-end discrete AMD GPU have?
     
    #142 Michellstar, Jun 5, 2013
    Last edited by a moderator: Jun 5, 2013
  3. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I do not want to give much credit to what at this point looks like a lot of FUD, but I guess that, as in other fields involving a fair amount of complexity, black swans are bound to happen.
     
  4. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    There's your issue right there, it would seem.
     
  5. Michellstar

    Regular

    Joined:
    Mar 5, 2013
    Messages:
    662
    Likes Received:
    380
    I guess we'll find out eventually; I hope they disclose the clocks soon.
     
  6. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
    If Intel with Iris Pro has gone with off-die eDRAM instead of on-die eSRAM, and they are already at 22nm with the best process engineers in the world, we should be asking why MS wouldn't have problems producing a stable and functional 32 MB eSRAM configuration, rather than the contrary. What is the maximum amount of SRAM Intel has put in its CPUs? 12-16 MB in Xeon server processors? And how much do those cost?

    MS should scrap the X1 altogether, go with a discrete CPU + discrete GPU, and go for GDDR5 like Sony. The X1 has more transistors in its chip than a 7970, and its performance could end up being worse than a 7750's!
     
    #146 Love_In_Rio, Jun 5, 2013
    Last edited by a moderator: Jun 5, 2013
  7. Ketto

    Newcomer

    Joined:
    Jul 30, 2012
    Messages:
    39
    Likes Received:
    0
    Location:
    Winter Park, Florida; and London UK.
    I honestly am still very reserved about the entire thing. I doubt it's anywhere near as big as GAF is making it out to be, but that's my personal take on the matter.
     
  8. Michellstar

    Regular

    Joined:
    Mar 5, 2013
    Messages:
    662
    Likes Received:
    380
    I think choosing eSRAM was more a matter of staying with AMD tech and within one APU.
    We need more sources, or better-quality ones, to give credit to this mess.
     
  9. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
    Well, we still have no official confirmation of the PS3 GPU clock, despite it being rumored eons ago to have been downclocked to 500 MHz. So go figure...
     
  10. Michellstar

    Regular

    Joined:
    Mar 5, 2013
    Messages:
    662
    Likes Received:
    380
    I didn't know that Sony never stated the GPU clocks on RSX.

    Anyway, if this fiasco turns out to be true, can we change the topic of this thread?

    "The ESRAM in Durango as a possible performance hindrance" :???:
     
  11. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
    LOL.
     
  12. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    This is a technical thread in the technical forum. The subject of the ESRAM's impact on GPU performance is independent of the final performance of XB1, so let's keep that discussion in the rumour thread.
     
  13. Cyan

    Cyan orange
    Legend

    Joined:
    Apr 24, 2007
    Messages:
    9,734
    Likes Received:
    3,460
    Ah okay, with the arrival of the new technologies the apparent disadvantage isn't there, then. I knew the read/write capability was a big difference favouring the new embedded RAM, but the advantages in other areas are striking as well, efficiency-wise. 8 years is a lot of time in the computer space.

    Thanks a lot, ERP, best explanation ever! Now I am going to show off about that when talking to people on the net regarding the subject.

    This post reminds me why I love Beyond3D so much.

    What would be the approximate equivalent bandwidth to 102 GB/s in terms of the technology of the past? I mean, could 102 GB/s be like 200+ GB/s of fully uncompressed data? A 100% performance gain could be considered really good.

    Additionally, when you say "you can actually exceed the peak memory bandwidth", I understand you to mean that you can exceed that theoretical peak within the constraints of the eSRAM's 102 GB/s (i.e. if that data passed through the eSRAM uncompressed, it would be comparable to a much larger uncompressed framebuffer), not that you can actually move more than 102 GB/s of data, right? Excuse me if I am wrong.
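    One way the arithmetic behind this question can be sketched (this is my own assumption with hypothetical numbers, not ERP's explanation: the key premise is that the eSRAM can service reads and writes concurrently, so read-modify-write workloads like blending see more aggregate traffic than the one-directional figure):

    ```python
    # Back-of-the-envelope sketch of "exceeding peak bandwidth" (hypothetical
    # numbers; assumes the eSRAM can service reads and writes concurrently).

    ONE_WAY_GBPS = 102.0  # rumored one-directional peak

    def effective_gbps(read_fraction):
        """Aggregate traffic for a write-heavy workload.

        read_fraction is the share of writes that are paired with a read in
        the same cycle; 1.0 means every write is a read-modify-write
        (e.g. alpha blending).
        """
        return ONE_WAY_GBPS * (1.0 + read_fraction)

    pure_writes = effective_gbps(0.0)  # 102.0 - no gain over the quoted peak
    full_blend = effective_gbps(1.0)   # 204.0 - read-modify-write every cycle
    ```

    Real workloads rarely sustain the ideal 1:1 read/write ratio, so the achievable aggregate would land somewhere between the two figures.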
     
  14. Cyan

    Cyan orange
    Legend

    Joined:
    Apr 24, 2007
    Messages:
    9,734
    Likes Received:
    3,460
    I hope not.

    I kinda loved the architectural design of the machine from the very beginning and accepted the specs as they were.

    Adding GDDR5 at this point seems impossible.

    But you never know; after what happened with the eSRAM and the last-minute changes, everything seems unpredictable. :???:

    I wonder, though: after ALL this time, they only now found out that the eSRAM is causing problems?? Seriously??!!! :shock:

    After the disassemblies and teardowns of the machine we've already seen it inside, and this happens now, when it should be in production? :shock:

    Darn... wouldn't it be better to just remove a couple of CUs? What were Microsoft's engineers thinking?? :???:
     
  15. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    About 12 MB, probably a bit more, as there are a lot of smaller buffers unaccounted for in that number.

    The only thing I could imagine (save for a major planning mistake) would be some kind of timing problem in a large array (as opposed to the high number of smaller arrays used in GPUs), roughly along the lines of what 3dilletante (I think) wrote in another thread. SRAM should neither consume a lot of power (compared to the additional 6 CUs and 16 ROPs Orbis has over Durango), nor should there be serious yield issues (in the sense of nonfunctional dies) if they planned for enough redundancy. SRAM is regularly produced as one of the first functional things on a new process (Intel is well known for it; they bragged about their 364 Mbit SRAM chips with >2.9 billion transistors on 22nm in 2009, roughly 2 years before any CPU using this process came close to market). And even some fundamental timing issue should have been caught early on and fixed in a respin. So I'm a bit puzzled about the alleged reasons for the alleged massive downclock so late in the game.
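    As a sanity check on those numbers, the raw cell-transistor counts are easy to sketch (my own back-of-the-envelope arithmetic, assuming standard 6T SRAM cells; the gap to Intel's quoted >2.9 billion total would be peripheral logic such as decoders, sense amplifiers, and redundancy):

    ```python
    # Back-of-the-envelope SRAM transistor counts (assumes 6T cells;
    # peripheral logic and redundancy add on top of the raw cell count).

    MBIT = 1024 * 1024  # bits per megabit

    def sram_cell_transistors(megabits, transistors_per_cell=6):
        """Raw transistor count of the storage cells alone."""
        return megabits * MBIT * transistors_per_cell

    intel_364mbit = sram_cell_transistors(364)    # ~2.29 billion cell transistors
    durango_32mb = sram_cell_transistors(32 * 8)  # 32 MB = 256 Mbit -> ~1.61 billion
    ```

    On that rough basis, the 32 MB of eSRAM alone would account for over 1.5 billion of Durango's transistors before any peripheral logic, which goes some way toward explaining the large-transistor-count comparisons earlier in the thread.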
     
  16. almighty

    Banned

    Joined:
    Dec 17, 2006
    Messages:
    2,469
    Likes Received:
    5
    Could ESRAM be a hindrance to developers like the EDRAM in the 360 was?

    Will developers have to use another form of predictive tiling to maximise space?

    Will using this kind of thing cause overlapping tiles?
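    For context on why tiling was needed at all on the 360, the framebuffer arithmetic is easy to sketch (my own numbers, assuming the usual 4-byte color plus 4-byte depth/stencil per sample):

    ```python
    # Why the Xbox 360's 10 MB of eDRAM forced tiling: a 720p framebuffer
    # with 4x MSAA does not fit (assumes 4-byte color + 4-byte depth/stencil
    # per sample, i.e. 8 bytes per sample).

    EDRAM_BYTES = 10 * 1024 * 1024

    def framebuffer_bytes(width, height, msaa_samples, bytes_per_sample=8):
        """Total color + depth storage for a multisampled render target."""
        return width * height * msaa_samples * bytes_per_sample

    fb_720p_4x = framebuffer_bytes(1280, 720, 4)  # 29,491,200 bytes (~28 MiB)
    tiles_needed = -(-fb_720p_4x // EDRAM_BYTES)  # ceiling division -> 3 tiles
    ```

    Three tiles means re-submitting geometry that crosses tile boundaries, which is the overlap cost the post above asks about; shrinking the render resolution until the buffer fits in 10 MB is where the odd native resolutions came from.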
     
  17. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,516
    Likes Received:
    24,424
    How exactly was EDRAM a hindrance to developers if they didn't want to utilize it?
     
  18. almighty

    Banned

    Joined:
    Dec 17, 2006
    Messages:
    2,469
    Likes Received:
    5
    It caused developers to have to run odd native resolutions because of the lack of space...

    And that's the whole point: most developers didn't use it, making it useless... so why use a similar thing again?
     
  19. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    That is impossible, as everything that was written out through the ROPs ended up in the eDRAM. AFAIK, one couldn't render directly to memory (Durango can; the Xbox 360 couldn't).
     
  20. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    Yes, the Xbox 360 ROPs are inside the EDRAM die and write directly to it. If your game needs ROP functionality (depth buffering, stencil, triangle inside test, blending), you need to use EDRAM on Xbox 360. I haven't heard of a single current-gen game that doesn't render polygons (even Minecraft does)...

    You can also write directly to memory with MEMEXPORT. But that's similar to compute shader writes: it is a direct raw memory write, with no ROP functionality. So it's not good for traditional triangle rendering algorithms, but very handy for many other purposes (and thus one of my favourite features of the Xbox 360 platform).

    10 MB of EDRAM is more than enough for 720p forward rendering. However, 3 years after the Xbox 360 was released, deferred rendering was becoming really popular. Most AAA game engines are now using deferred rendering. Nobody could have guessed this development in 2005. Researchers invent new algorithms every year that stress GPUs in ways that were not done before. Deferred rendering was one of these things, and it allowed developers to use a much higher count of local light sources and a much lower count of shader permutations (fewer GPU state changes = better GPU efficiency = better performance). A fully optimized g-buffer layout (CryEngine 3) uses 12 bytes per pixel. At 720p that is 10.8 MB. Had Microsoft seen 7 years into the future, they would have chosen to include 12 MB of EDRAM in the Xbox 360 instead, and we wouldn't be having this discussion :)

    EDRAM was one of the key things that allowed the Xbox 360 GPU to outperform the PS3 GPU. PS3 developers needed to use Cell CPU resources to compensate for the GPU performance/bandwidth difference. This required lots of extra development time and basically removed any CPU performance advantage (better physics / AI / etc.) the PS3 could have had in the long run. The Xbox 360 GPU architecture + EDRAM were among the best engineering decisions made for the last generation of hardware.
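    The g-buffer sizing above is quick to verify (a sketch of the numbers from the post; the 12-bytes-per-pixel figure is taken from it, and the exact byte total depends on whether MB means 10^6 or 2^20 bytes):

    ```python
    # Quick check of the g-buffer footprint cited above: 12 bytes per pixel
    # (e.g. three 32-bit render targets) at 720p versus the 360's 10 MB eDRAM.

    def gbuffer_bytes(width, height, bytes_per_pixel=12):
        """Total g-buffer storage for a given resolution."""
        return width * height * bytes_per_pixel

    size_720p = gbuffer_bytes(1280, 720)          # 11,059,200 bytes (~10.5 MiB)
    fits_in_10mb = size_720p <= 10 * 1024 * 1024  # False: would need tiling
    fits_in_12mb = size_720p <= 12 * 1024 * 1024  # True: the 12 MB wish above
    ```

    So a 12 MB array would indeed have held a full 720p g-buffer of that layout with room to spare, while 10 MB falls just short.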
     