AMD: RDNA 3 Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Oct 28, 2020.

  1. Leoneazzurro5

    Leoneazzurro5 Regular

    Those for the VRAM
     
  2. The memory controllers are supposed to be on the GCDs, not on the cache dies.
     
  3. Leoneazzurro5

    Leoneazzurro5 Regular

    Who said that?
     
  4. Granath

    Granath Newcomer

    some senior leak developer ;)
     
    Lightman likes this.
  5. Leoneazzurro5

    Leoneazzurro5 Regular

    All the leaks I have seen so far have no data about the structure of the memory bus, and I have seen a lot of them. Could you please give me a link?
    Putting the I/O on the 5nm die instead of a cheaper 6nm die is a waste. Moreover, there are patents from AMD which hint at the opposite.
     
  6. Leoneazzurro5

    Leoneazzurro5 Regular

    I am quite sure I have seen other patents showing a different arrangement of the interposer with cache. Anyway, we'll see what it really is when RDNA3 launches. To me, having the memory bus on the compute chiplets is really a waste (I/O scales much worse than compute, and on expensive 5nm it will add to the costs). Moreover, stacking the cache chip on top will only worsen the thermal transfer from the hotter parts (the compute dies), as the heat spreader is on top. But, as said, we'll see.
     
  7. Granath

    Granath Newcomer

    Even in this topic, our senior leaker stated that the IC and I/O will be on 6nm and placed on the MCD.
     
  8. Leoneazzurro5

    Leoneazzurro5 Regular

    That's what I am saying. GCDs are on 5nm.
     
  9. Yes, to me it also sounds a bit counterintuitive that they're putting the cache and/or I/O chips between the heatsink and the compute dies, which are probably producing >80% of the heat.


    Zen 3D V-Cache will use three types of chips: an I/O chip, CCDs and cache chips. On the CPU side they're indeed making that distinction already.

    On the GPU side it could be that AMD is planning on extending their modularity options to the cache chips, perhaps producing the same V-Cache chips that can go into either GPUs or CPUs, or having the same V-Cache chips serve different generations of GPUs, for example.
    That means the I/O would need to not be inside the cache chips, as those same chips could be paired with SoCs that use very different memory technologies (DDR4, DDR5, LPDDR5, GDDR6, GDDR7, HBM2E, HBM3, etc.). By putting the PHYs inside the cache chips you're limiting the type of solutions those cache chips can be used in.
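    The modularity argument above can be sketched in a few lines of code. This is purely illustrative (not AMD's actual design, and all names are hypothetical): if the cache die is "dumb" SRAM with no memory PHY, the same die composes with any host die, regardless of which memory technology the host's own PHY speaks.

    ```python
    # Illustrative sketch of the "dumb cache die" modularity argument.
    # Nothing here reflects AMD's real design; names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CacheDie:
        size_mb: int              # SRAM + stacking interface only, no memory PHY

    @dataclass(frozen=True)
    class HostDie:
        name: str
        memory_phy: str           # the PHY lives on the host die, not the cache die

    def stack(host: HostDie, cache: CacheDie) -> str:
        """Pair one generic cache die with any host, whatever its memory tech."""
        return f"{host.name} + {cache.size_mb}MB V-Cache over {host.memory_phy}"

    vcache_128 = CacheDie(size_mb=128)   # one die design, reused across products
    products = [
        HostDie("Zen4 CCD", "DDR5"),
        HostDie("Zen4 APU", "LPDDR5"),
        HostDie("RDNA3 GCD", "GDDR6"),
    ]
    for p in products:
        print(stack(p, vcache_128))
    ```

    The point of the sketch: because `CacheDie` carries no `memory_phy` field, nothing constrains which hosts it pairs with. Move the PHY into the cache die and the pairing is no longer free.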
     
    Lightman likes this.
  10. Leoneazzurro5

    Leoneazzurro5 Regular

    Quite frankly, the only memory type used for high-performance GPUs today and in the foreseeable (short-to-medium term) future is GDDR6 (GDDR6X if you count Nvidia's solutions, but so far AMD has shown no sign of wanting to use it). DDR4, DDR5 and LPDDR5 are for low-performance solutions which quite probably will not need stacked cache. HBM is still expensive, to the point where IC was seen as a viable alternative to it. GDDR7 is not even a thing today, nor will it be next year. While I understand your point, the more process nodes shrink, the more the cost of I/O on the GCDs will increase (imagine almost the same area used for I/O on 5nm and 3nm, with the latter process being 30-40% more expensive...).
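    A quick back-of-envelope sketch of that parenthetical: because analog/PHY area barely shrinks between nodes, its cost tracks the wafer price almost 1:1. All numbers below are illustrative assumptions (a 30mm² I/O block, a ~0.95x I/O area shrink, and a 35% per-mm² cost uplift taken from the 30-40% range mentioned above), not actual TSMC figures.

    ```python
    # Back-of-envelope: I/O cost across nodes when its area barely shrinks.
    # All numbers are illustrative assumptions, not real foundry pricing.

    def io_cost(area_mm2: float, cost_per_mm2: float, area_scaling: float) -> float:
        """Cost of an I/O block whose area shrinks by `area_scaling` on the new node."""
        return area_mm2 * area_scaling * cost_per_mm2

    io_area_5nm = 30.0     # assumed PHY/I-O area on N5, in mm^2
    cost_5nm = 1.0         # normalized cost per mm^2 on N5
    cost_3nm = 1.35        # ~35% more per mm^2 (the 30-40% range from the post)

    # Logic might shrink ~0.6x node-to-node; analog/IO perhaps only ~0.95x.
    cost_on_5nm = io_cost(io_area_5nm, cost_5nm, area_scaling=1.0)
    cost_on_3nm = io_cost(io_area_5nm, cost_3nm, area_scaling=0.95)

    print(f"relative I/O cost on N5: {cost_on_5nm:.2f}")
    print(f"relative I/O cost on N3: {cost_on_3nm:.2f}")
    ```

    Under these assumptions the same I/O block costs roughly 1.28x as much on the newer node, i.e. it absorbs nearly the full wafer-price uplift, which is the argument for keeping it off the expensive compute die.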
     
  11. Bondrewd

    Bondrewd Veteran

    We're gonna see it in DC parts too later on.
    HBM isn't quite scaling as fast as everyone would've liked it to.
     
    Lightman likes this.
  12. I meant to say that if the V-Cache chips are really just dumb cache, they could use the same V-Cache chips for CPUs and GPUs of different generations.

    If AMD makes a very large order of 128MB and 256MB cache chips made on TSMC's N6, they could use these very same chips on top of Zen4 CPUs, Zen4 APUs, RDNA3, RDNA4, etc.

    The advantage of such a solution is that it would not only be usable in the short-to-medium term. AMD could allocate production of the same V-Cache chips for over 3 years and ensure that a significant component of their GPU, CPU and APU offerings is taken care of, and throughout those 3 years we could see adoption of a number of the memory technologies I mentioned.
    It would be an especially interesting proposition considering that the IHVs' success over the next few years may depend more on how many products they can sell (i.e. how dependent they are on state-of-the-art fab processes and how much capacity they can allocate) than on how well those products perform.

    As for DDR5 / LPDDR5 solutions, we're talking APUs / SoCs here. IIRC, part of the whole point of adopting IC was to bring higher performance to SoCs that are stuck with slower memory.
    Infinity Cache is most probably coming to low-power APUs eventually.
     
  13. Leoneazzurro5

    Leoneazzurro5 Regular

    Yes, I understood your point. But, as said, the issue is mostly the balance among the various die and package costs. It may be more cost effective to have dumb cache dies, but I highly doubt it.
     
  14. Jawed

    Jawed Legend

    iroboto and Lightman like this.
  15. Granath

    Granath Newcomer

  16. Jawed

    Jawed Legend

    Very sloppy leaker who mixes guesses amongst "leaks".

    Later, he claims that WGP counts and clocks are the leaks.
     
  17. gamervivek

    gamervivek Regular

    The leaker says that the memory and cache sizes are his guess. I doubt AMD is going to put only 16GB on it.



    RDNA3 looks to be quite a ways off, however; we're still getting these numbers and guesses, and nothing like the driver leaks and other material we had for Navi2x.
     
  18. Rootax

    Rootax Veteran

    Or maybe, this time, they're just controlling the information better? The pre-RDNA2 period was so leaky...
     
  19. DegustatoR

    DegustatoR Veteran

    There are zero reasons to put anything more than 16GB on a gaming videocard for the foreseeable future.
     
    milk, Jawed, PSman1700 and 3 others like this.