Samsung GDDR6 expected with 14-16 Gbps from 2018

Discussion in 'Graphics and Semiconductor Industry' started by Erinyes, Aug 22, 2016.

  1. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    92
    Wasn't sure exactly where to post this..figured this is the best place and it wouldn't clutter up any GPU/Architecture threads.

    At Hot Chips 28, Samsung just announced GDDR6 for 2018. G6 will start off where G5X ends, i.e. 14 Gbps, and will scale to 16 Gbps. It runs at the same 1.35V as G5X, but Samsung's graphs show it consuming less power (~30% lower mW/Gbps, if I'm reading that chart right). This looks like it is timed for the next-gen 10nm GPUs.

    There's also a new LPDDR5 standard with 6.4 Gbps planned for 2018, though this is more relevant for mobile.

    https://www.computerbase.de/2016-08/samsung-gddr6-14-16-gbps-2018/
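    A quick sanity check of what those per-pin rates imply for whole-card bandwidth. The 14-16 Gbps figures are from the announcement; the bus widths below are just illustrative examples, not anything Samsung stated:

    ```python
    # Peak bandwidth implied by the announced per-pin data rates.
    # Bus widths are hypothetical examples for comparison.

    def peak_bandwidth_gbs(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
        """Peak bandwidth in GB/s: per-pin rate * bus width / 8 bits per byte."""
        return data_rate_gbps_per_pin * bus_width_bits / 8

    # A hypothetical 256-bit card:
    print(peak_bandwidth_gbs(14, 256))  # 448.0 GB/s
    print(peak_bandwidth_gbs(16, 256))  # 512.0 GB/s
    # For comparison, 11 Gbps G5X on a 352-bit bus (1080 Ti-class):
    print(peak_bandwidth_gbs(11, 352))  # 484.0 GB/s
    ```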
     
  2. Wynix

    Veteran Regular

    Joined:
    Feb 23, 2013
    Messages:
    1,052
    Likes Received:
    57
    With low cost HBM, who is the intended target of GDDR6?
     
  3. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Lower-cost HBM2 (and the original HBM) should still be more expensive than a 4GB pool of GDDR6 for mid-range and low-end GPUs and notebooks. But it's hard to say for the future; who knows what the prices will be in 2018, or which products could use it.

    If they had announced this for early 2017, fine, but 2018? I don't know.
     
    #3 lanek, Aug 23, 2016
    Last edited: Aug 23, 2016
    Razor1 and AlBran like this.
  4. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    A better word would be "lower" instead of "low".

    The fundamentals don't change. It's still a bunch of dies stacked on top of each other on top of an interposer. It's always going to be much more expensive.

    The removal of the buffer die and ECC memory (which I thought was optional already for HBM2?) seem to be the biggest cost reduction factors.
     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    DDR4 memory is starting to stack going forward as well. Some other memory manufacturer slides showed DRAM processes are hitting a scaling and mask layer wall as well, so cost scaling without stacking is diminishing.

    In the more premium space, that might take things down to maybe the base die and interposer being different. Reduced-cost HBM would seemingly bring the difference down to the interposer, with organic interposer (maybe fan-out package someday?). EMIB or something similar might reduce the difference if it were used here. Seemingly, that would put a burden on the silicon interposer path to double down on providing some kind of density scaling for its TSV and bumps, which some like Intel have been critical about going forward.

    Without the base die housing routing, test, and other functions for yield and management, where would it go to avoid a hit to yields? Does this put more work on the main die, or possibly making the HBM stack layers more independent and doing similar stacking to some of Samsung's VNAND with a step pyramid stack and a per-layer "base section"?

    Dunno about the ECC item if these stacks are threatening to put tens of GB in a hot stacked configuration. Maybe we'd be better off in general if computing devices made it more standard for general data integrity and security purposes? Row hammer exploits tend to throw ECC events before succeeding, for example.
    HBM2 might nominally have ECC as optional, but it's possible given its apparent network focus and high-end compute GPU customers that there's not enough of a customer pool to justify ECC and non-ECC die variations.
     
    Grall likes this.
  6. Wynix

    Veteran Regular

    Joined:
    Feb 23, 2013
    Messages:
    1,052
    Likes Received:
    57
    It might just be a back-up plan.
    If the costs of HBM do not drop enough by 2018, it gives customers more choice for low cost, high bandwidth memory.
     
  7. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,837
    Likes Received:
    4,455
    For cases where investing in an interposer doesn't make much sense, financially..
     
  8. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    92
    I was just thinking about this..what is lower power..HBM2 or LPDDR4X/LPDDR5? (Assuming similar bandwidth and capacity)

    It should be HBM2 I suppose but if anyone with knowledge on this can chip in that would be appreciated! And if so..does using HBM2 for an SoC instead of LPDDR make any sense?
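    One rough way to frame that question: DRAM interface power scales approximately as bandwidth times energy per bit. A sketch, with the strong caveat that the pJ/bit values below are my own illustrative ballpark figures, not vendor specs for either technology:

    ```python
    # Back-of-the-envelope comparison: interface power ~= bandwidth * energy/bit.
    # The pJ/bit numbers are ILLUSTRATIVE placeholders, not measured values.

    ENERGY_PJ_PER_BIT = {
        "HBM2": 4.0,     # assumed: wide, slow interface over short interposer traces
        "LPDDR4X": 5.0,  # assumed: narrower, faster interface over PCB traces
    }

    def interface_power_watts(bandwidth_gbs: float, pj_per_bit: float) -> float:
        """Interface power in watts for a given bandwidth (GB/s) and energy/bit (pJ)."""
        bits_per_second = bandwidth_gbs * 8e9   # GB/s -> bits/s
        return bits_per_second * pj_per_bit * 1e-12

    for tech, energy in ENERGY_PJ_PER_BIT.items():
        print(tech, round(interface_power_watts(50, energy), 2), "W at 50 GB/s")
    ```

    The point is just that at equal bandwidth, whichever interface costs fewer pJ/bit wins on power; the capacity and idle/refresh power sides are a separate question.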
     
  9. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    The JEDEC standard has been around since July and nothing happened in this thread, hence de-necroing it. :)
    https://www.jedec.org/document_search?search_api_views_fulltext=GDDR6
    (account to access Spec is free, btw)

    What's the big deal about G6 now? From a first glance at the bullet points, I do not see much that would prevent G5X from being expanded to encompass G6. Is it just politics, since G5X was apparently mostly the work of a single manufacturer?
     
    Grall likes this.
  10. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    513
    Likes Received:
    234
    Samsung promised 16Gb/16Gbps chips.
     
  11. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    Yeah, I got that. But what's keeping G5X from doing that as well? Micron-proprietary stuff?
     
  12. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    513
    Likes Received:
    234
    Who knows.
    Micron itself jumped on the G6 train.
     
  13. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    609
    Likes Received:
    1,036
    Location:
    PCIe x16_1
    In a year? Hard to say. Right now? This stuff's hard. I know the Micron guys have been working on this for a while, and we only got 11Gbps cards earlier this year. The memory bus aspects have not been fun.
     
    BRiT likes this.
  14. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Maybe part of it comes back to single channel vs dual - I appreciate this is not the same context as PC RAM.
    Also, it would be a pain to find, but an Nvidia engineer did mention that part of the headache around G5X and its speed problems in certain scenarios, such as crypto mining, was the requirement on how the memory is managed in terms of transmission/control/access with G5X, and the changes Nvidia had to make to support it.
    If the issue is particular to G5X, it makes me wonder whether Nvidia will replace it on all future cards, and not just with GDDR6 on, say, a Titan GeForce/Tesla/Quadro.
    Yeah, they seem very similar, but some subtle differences between the two memory solutions do exist.

    Worth noting that Micron also has 12Gbps available for G5X, and has for a little while.
    However, I would be surprised if Samsung has GDDR6 at 16Gbps available anytime soon next year, as that feels more like lab conditions; remember Micron managed 14Gbps with G5X in the lab quite some time ago.
     
    #14 CSI PC, Nov 20, 2017
    Last edited: Nov 20, 2017
    pharma and BRiT like this.
  15. huebie

    Newcomer

    Joined:
    Apr 10, 2012
    Messages:
    29
    Likes Received:
    5
    There is no "Micron-proprietary stuff". See the JEDEC spec sheet. GDDR5X was validated at 14 Gbps, and GDDR6 is the next iteration, as it is mostly G5X. IIRC the tx/rx margins are stricter to achieve higher clock rates. This is obtained by broader clock-noise reduction or, to my understanding, a broader "frontend" (mux, shuffle, clock count etc.). I bet the more complicated side is the GPU memory controller rather than the DRAM chip itself... but just guessing.

    Edit: Ok, I should mention the process-node and material gains too. :D
     
  16. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    Holy post resurrection, Batman! I saw the JEDEC spec sheet; that's exactly why I was asking: 16 Gb dies are entirely within the spec, and Micron themselves gave 14-16 Gbps as a long-term target for G5X - which is now/soon featured in G6. And since it's not exactly the norm to see a certain type of DRAM manufactured by only a single vendor, I was wondering about the cause.

    Stricter specs with better noise reduction make sense, though. But why incorporate this into a new standard?
     
    #16 CarstenS, Dec 19, 2017
    Last edited: Dec 19, 2017
  17. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    Interesting that this thread should bob up to the surface, as I was wondering to myself just this morning how fast we might be able to go today, on a modern chip process, with AMD Hawaii's 'small' GDDR5 controllers. Maybe not exotic speeds, but surely a bit faster than the 6000 MT/s rate the R9 390 runs at...?

    If so, maybe there could be life still in a very wide and slower DRAM bus like Hawaii's? :p
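    The arithmetic behind that wondering is easy to sketch. The 7 and 8 Gbps rates below are just plain GDDR5 speed grades that shipped on other cards, not anything claimed for a faster Hawaii:

    ```python
    # Hawaii ships a 512-bit GDDR5 bus at 6 Gbps; what would speed bumps buy it?

    BUS_WIDTH_BITS = 512  # Hawaii / R9 390

    def bandwidth_gbs(rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
        """Peak bandwidth in GB/s for a given per-pin data rate."""
        return rate_gbps * bus_width_bits / 8

    for rate in (6, 7, 8):
        print(f"{rate} Gbps x 512-bit = {bandwidth_gbs(rate):.0f} GB/s")
    # 6 Gbps -> 384 GB/s (the R9 390's shipping configuration)
    # 8 Gbps -> 512 GB/s (matching Fiji's HBM1 bandwidth)
    ```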
     
  18. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    513
    Likes Received:
    234
    You don't go for 512b bus if you want bandwidth.
    You use HBM.
     
  19. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    I can see where Grall is coming from; from a cost perspective, GDDR5/6 makes more sense than HBM for high-performance models below the flagship, such as the 390/390X. But the trend is either a high-bandwidth wide bus at a cost (HBM2), or bus efficiency, which is the path that tier went down with Polaris.
    So yeah, the days of the 512-bit bus are gone, but not necessarily just because of HBM when one considers that tier.
    Still, it's nice to wonder what could have been with such a solution, as Grall mentions, on a new top-tier Polaris or an enthusiast Hawaii.
    The closest we see these days is from Nvidia, and they only go to a 384-bit bus on models that would be a tier above the 390X anyway.
     
    #19 CSI PC, Dec 19, 2017
    Last edited: Dec 19, 2017
  20. huebie

    Newcomer

    Joined:
    Apr 10, 2012
    Messages:
    29
    Likes Received:
    5
    I think the main reason is that NVIDIA and Micron teamed up behind closed doors and only went public when the results were promising enough and a concrete product was already in mass production. Other manufacturers such as Toshiba or Hynix wouldn't have sold a single G5X module, since only the few Pascal products used this type. A production switch (setup time) takes almost 7 weeks from flash to DRAM, and validating a new type of DRAM... who knows exactly. I don't think NV and Micron had an exclusive contract, but Micron was the only one who could cover the requested volume the vast majority of the time. So, in a nutshell: there was no need for other manufacturers to produce G5X. :)
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.