NVidia Ada Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Jul 10, 2021.

  1. Picao84

    Veteran

    Joined:
    Feb 15, 2010
    Messages:
    2,109
    Likes Received:
    1,195
    I don't think it's comparable, as a GPU is not bundled with a CPU.
     
  2. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
    Is it the same source who said that SEC couldn't produce big Ampere dies in any sufficient quantity?
    Today, yields are good at Samsung. Not as exceptional as TSMC 5nm, but NV is not restricted by SEC, and the huge price difference compensates.
     
  3. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    Ultimately, the future of hardware design is going to depend on what kind of rendering architecture developers are going to make ...

    Going for a larger bus width leaves a smaller window of improvement for shading performance because of the higher fixed power consumption, so this change will negatively skew renderers that rely on forward shading or more complex single-pass shading. Using an on-chip cache makes the memory hierarchy more complex, which means deferred shading becomes more sensitive to the size of the G-buffer, but power consumption becomes more manageable in this scenario ...

    The idea behind a visibility buffer is to do an "early split" in the shading pipeline, so it's compatible with either deferred or forward shading along with many of their respective benefits/drawbacks ...
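    To make the "early split" concrete, here's a toy Python sketch (my own illustration, not any engine's actual code): pass 1 commits only packed instance/triangle IDs per pixel, and all material/lighting work happens in pass 2 from those IDs, which is why the split stays compatible with either deferred- or forward-style shading.

```python
# Toy visibility-buffer sketch: the geometry pass writes only IDs,
# and every shading decision is deferred to a second pass.

WIDTH, HEIGHT = 4, 2

def pack(instance_id, triangle_id):
    # 12 bits of instance, 20 bits of triangle in one 32-bit value
    return (instance_id << 20) | triangle_id

def unpack(v):
    return v >> 20, v & 0xFFFFF

# Pass 1: "visibility" pass (stand-in for hardware rasterization) -
# each pixel stores a packed (instance_id, triangle_id), nothing else.
vis_buffer = [[pack(1, x + y * WIDTH) for x in range(WIDTH)]
              for y in range(HEIGHT)]

# Pass 2: shading pass. Material fetch and lighting happen only here,
# so the pass can be organized deferred- or forward-style.
materials = {1: (0.8, 0.2, 0.2)}  # instance_id -> albedo

def shade(albedo, tri_id):
    # toy "lighting": modulate albedo by triangle-id parity
    k = 1.0 if tri_id % 2 == 0 else 0.5
    return tuple(c * k for c in albedo)

image = [[shade(materials[unpack(v)[0]], unpack(v)[1]) for v in row]
         for row in vis_buffer]
```

    The point of the structure is that the G-buffer shrinks to a single ID plane; the cost moves into pass 2's attribute fetches instead.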
     
    Dictator and pjbliverpool like this.
  4. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,451
    Likes Received:
    471
    "Samsung Electronics’ state-of-the-art V1 Line at Hwasung Campus still has yield problems, with the yield of some 5-nm products remaining below 50 percent, multiple industry insiders said on July 4."
    http://www.businesskorea.co.kr/news/articleView.html?idxno=71056

    Samsung's 5LPA seems to have been renamed 4LPA, and the leaked roadmap more or less confirms that (5LPA disappeared, a 4nm generation appeared). It also confirms that 3GAE was cancelled (which means Samsung's first GAA process is postponed to 2023).
     
    Silent_Buddha and Lightman like this.
  5. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    [semiot]
    If I'm not mistaken, the Lovelace codename was already confirmed for something other than a GPU architecture.
    The whole confusion started when some leaker just dropped the codename with no reference to what it was, and the rumor boat started sailing, because clearly NVIDIA has codenames only for GPU architectures
    [/semiot]
     
  6. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
    I was talking about the current 8nm node Nvidia is using. For next gen, Samsung still has a full year to improve its EUV nodes
     
    PSman1700 likes this.
  7. I think most rumors and reports from Taiwan suggest NV going with N5 as well.
    But if there is something to be learnt, going with 8N was a brilliant idea because of uncontested wafer allocation.
    So I would hazard a guess it will be a split between N5 and 5LPE/5LPP for NV. For Auto, it is already known the process for the next-gen Tegra devices will be 8LPP+/8LPA.
    AMD is also rumored to be going with N5P and N6 (different fab lines).

    Also, official information (I have a link but it is paywalled) says 5LPP, not 5LPA. 8LPA is there, however.
     
  8. itsmydamnation

    Veteran

    Joined:
    Apr 29, 2007
    Messages:
    1,349
    Likes Received:
    470
    Location:
    Australia
    Samsung disagrees with GAE being cancelled
    https://www.anandtech.com/show/16815/samsung-deployment-of-3nm-gae-on-track-for-2022

     
    HLJ, Lightman, Krteq and 1 other person like this.
  9. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Is it? Last time I checked G6 topped out at 18Gbps while G6X could hit 21.
     
    HLJ, BRiT and PSman1700 like this.
  10. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    20Gbps G6 has been announced for this year, at least the intention to start production. Considering the apparent cost and yield issues of GDDR6X, with no particular speed advantage since even the 3090 only hits 19.5Gbps, it doesn't seem to make a lot of sense to stick with G6X.
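    For reference, the rough arithmetic behind the comparison (the 384-bit bus is the 3090's; the pin rates are the figures quoted in this thread, not confirmed next-gen specs):

```python
# Peak GDDR bandwidth: bits per second across the whole bus, divided by 8.
def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    return bus_width_bits * pin_rate_gbps / 8

g6x_3090 = bandwidth_gbs(384, 19.5)  # GDDR6X as shipped on the 3090 -> 936.0 GB/s
g6_20    = bandwidth_gbs(384, 20.0)  # hypothetical 20Gbps GDDR6     -> 960.0 GB/s
g6x_21   = bandwidth_gbs(384, 21.0)  # GDDR6X rated peak             -> 1008.0 GB/s
```

    On those numbers, 20Gbps G6 would land between shipped and rated G6X on the same bus, which is the crux of the "why stick with 6X" argument.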
     
  11. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Could you point me to the announcement? The only thing I see on 20Gbps G6 is an OC thing from Micron from 2018.
     
    PSman1700 likes this.
  12. Phantom88

    Newcomer

    Joined:
    May 11, 2021
    Messages:
    193
    Likes Received:
    579

    Nvidia's roadmap for "Ampere Next and Ampere Next Next" puts them at 2-year cycles. Same launch window as Ampere, around Sept 2022 and Sept 2024. I'd say the launch also depends on what AMD does. Nobody is gonna launch their next gen in 2023 and leave AMD the performance crown if AMD has RDNA 3 for 2022.
     
  13. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    They also disagree with their EUV usage being shit, but it is.
    3GAE is ded.
     
  14. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Do you hear Intel's i740, TurboCache and HyperMemory silently weeping?
     
    Kej, Lightman and nnunn like this.
  15. itsmydamnation

    Veteran

    Joined:
    Apr 29, 2007
    Messages:
    1,349
    Likes Received:
    470
    Location:
    Australia
    How dead is dead?
    Gone dead,
    Cannon Lake / Intel first-gen 10nm dead,
    or TSMC 20nm dead (it just wasn't good, so no one really used it; yes, I know phone SoCs did, and some random SPARC chips)?
     
  16. JasonLD

    Regular

    Joined:
    Apr 3, 2004
    Messages:
    463
    Likes Received:
    105
    3GAE is probably gonna be used for their own Exynos, just not for customers, I think.
     
  17. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Basically this, give or take.
    If we're extra lucky it gets the 14LPE treatment, aka usable but nowhere near cost-competitive.
    Ironically enough N3 is N20 remake and everyone is using it.
    Fucking a rotting carcass ain't fun it seems...
     
  18. The i740 did have dedicated memory... was it able to allocate system memory?
    The other two had terrible 3D performance indeed, but it was sufficient for low-end office and multimedia needs, at a time when the GMA900 had no H.264 or VC-1 acceleration.

    Regardless, my comment was related to the eventual rise of 3D-stacked LLC in GPUs and the acceleration of PCIe bandwidth.
    In a little over 2 years we should have 128GB/s duplex bandwidth from a PCIe x16 connector and more than 128GB/s over a 2-DIMM DDR5 motherboard.

    Assuming 3D-stacking and cache ICs get cheaper as adoption rises, it could be more cost-efficient to get a midrange GPU that uses e.g. a 3D-stacked cache of 256MB covering over 75% of the bandwidth needs in a gaming load, and then just use the PCIe at ~100-120GB/s to cover the rest.

    Perhaps this still isn't possible with PCIe 6.0 and DDR5, but on each iteration of PCIe and GPU architectures with larger LLC chunks I think we're getting closer to VRAM-less architectures.
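    A quick back-of-envelope for the scenario above. The link and memory speeds here are my assumptions (PCIe 6.0 x16, DDR5-8000 across two channels), and the 400 GB/s demand figure for a midrange GPU is purely illustrative:

```python
# PCIe 6.0: 64 GT/s per lane with PAM4. Raw bytes/s per direction,
# ignoring FLIT/encoding overhead (a few percent in practice).
def pcie_gbs(gt_per_s, lanes):
    return gt_per_s * lanes / 8

# DDR5: each channel moves 8 bytes per transfer.
def ddr5_gbs(mt_per_s, channels, bytes_per_channel=8):
    return mt_per_s * channels * bytes_per_channel / 1000

pcie6_x16 = pcie_gbs(64, 16)   # 128.0 GB/s each direction
ddr5_dual = ddr5_gbs(8000, 2)  # 128.0 GB/s across two channels

# If a 256MB stacked LLC absorbs 75% of a hypothetical 400 GB/s
# bandwidth demand, only the residue has to cross the link:
demand_gbs, hit_rate = 400, 0.75
residual = demand_gbs * (1 - hit_rate)  # 100.0 GB/s
```

    On those assumptions the residual traffic fits inside a PCIe 6.0 x16 link, which is the whole bet: the LLC hit rate, not the link, becomes the limiting factor.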


    Rotting carcass being... N5?
     
  19. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    At some size the LLC will just become VRAM, and nothing will change. The idea of using system memory for graphics never panned out and it's unlikely that it ever will; the PCIe link is just too long to provide enough bandwidth.
     
    Picao84, pharma, xpea and 1 other person like this.
  20. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,451
    Likes Received:
    471
    The i740 was designed when memory was extremely expensive. Intel's idea was to save expenses by using AGP texturing (so textures would stay in system memory) and to equip the card with a small amount of onboard memory for operating buffers. It wasn't a bad concept, but memory prices fell and the expected advantage was lost. In fact it became a disadvantage, because the GPU wasn't able to load textures from local memory, so adding more onboard memory didn't solve the problem. The i740 became a low-end product, but other manufacturers offered low-end products with a PCI bus, and that wasn't possible with the i740 due to its dependence on AGP texturing. Real3D prepared a PCI version of the i740. It was based on an additional controller between PCI and the GPU and equipped with a second pool of onboard memory, which was used solely for textures.
     
    Kej, Silent_Buddha, Lightman and 5 others like this.