AMD: RDNA 3 Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Oct 28, 2020.

  1. manux

    manux Veteran

    2pj/bit would be very good. Things could happen if that materializes.
     
  2. That's literally the figure for the 2017 EPYC... things happened a long time ago.
     
  3. manux

    manux Veteran

    I'm still not holding my breath for consumer chiplet GPUs. Datacenter, on the other hand, I think is inevitable. I'm happy to be wrong on this one, though, if consumer chiplet GPUs with no downside on perf turn out sooner rather than later.
     
  4. Rootax

    Rootax Veteran


    But they're not talking about that. RDNA 3 is their future gaming arch.

    CDNA X is their compute/science arch.
     
  5. manux

    manux Veteran

    It can be the same kind of red herring as GDDR6X turned out to be. Sometimes the internet thinks it knows, and it doesn't know.
     
  6. Bondrewd

    Bondrewd Veteran

    But N31 went past the power-on, lol.
     
  7. manux

    manux Veteran

    I wouldn't know. Perhaps you can link the material that tells what N31 is, so the rest of us would also know? On the internet, posts like this are easily ignored without solid sources.
    Ah, I guess this is the source:


    An 80CU chiplet in 5nm sounds odd. It should be possible to fit those 160 CUs into a single die. Maybe a test vehicle that isn't necessarily intended to be sold to consumers? An 80CU chiplet in 7nm could make more sense, considering the increasing prices of more advanced processes, and that an 80CU chip is already in production in 7nm.
     
    Last edited: Feb 11, 2021
    nutball likes this.
  8. DegustatoR

    DegustatoR Veteran

    They are also hitting 300W, so putting two of them in one product sounds kinda unrealistic. And that's where 5nm comes in.
     
    BRiT likes this.
  9. manux

    manux Veteran

    Drop the clock a little bit and 500W water-cooled is doable. The 3090 cooler could probably do that on air as well.
     
  10. DegustatoR

    DegustatoR Veteran

    The 3090 is a 350W card though. Quite a stretch to 500W.
     
    Cuthalu likes this.
  11. manux

    manux Veteran

    It goes to pretty insane power draws without the cooler giving out when overclocked. In its default config it's a very quiet card and the cooler isn't sweating.
     
  12. Bondrewd

    Bondrewd Veteran

    lol
    LOL.
    Please don't.
     
  13. manux

    manux Veteran

    LoL Lol, uh, rofl double lol. You win, I'll press the ignore button.
     
    Cuthalu, tinokun, Qesa and 2 others like this.
  14. Malo

    Malo Yak Mechanicum Legend Subscriber

    Bondrewd is back. Such quality discussion when the devoted butt heads.
     
    Cuthalu and Scott_Arm like this.
  15. manux

    manux Veteran

    I guess I was the ant looking at a spectacular light effect that turns out to be the sun and a magnifying glass. Should have known better.
     
  16. Frenetic Pony

    Frenetic Pony Regular

    Ok, here we go, again.

    A reminder: bandwidth for desktop GPUs is insanely cheap compared to their compute. The entire GDDR6X bus on a 3090 uses maybe a handful of watts at most. 7.25 picojoules per byte. A picojoule is 10^-12 joules, and 1 joule per second = 1 watt. So even a terabyte per second isn't that much (like 7 watts). Bandwidth is only dear compared to mobile power budgets. Compared to desktop/HPC stuff, where compute frequencies hit the exponential growth curve hard while bandwidth costs remain constant, bandwidth power usage is negligible. Remember, when citing math, to actually do the math.
     
  17. Qesa

    Qesa Newcomer

    It's 7.25 pJ per bit according to Micron. You're off by a factor of 8 there.
     
  18. vjPiedPiper

    vjPiedPiper Newcomer

    It's important to remember the difference between what die-to-die, or die-to-chiplet, energy costs using a proprietary in-socket interconnect, vs. what an external interface like DDR or HBM costs.
    Also, while GPU workloads probably have higher inter-thread communication, they are also better able to tolerate latency in these workloads, compared to CPUs anyway.

    While I'm no expert in these things, I think that taking the existing 80CU RDNA2 core with 256MB SRAM, a slight upgrade to the raytracing functionality, and a modification to the memory interface to work with a host I/O die, then shrinking it to 5nm and putting two of them plus an I/O die together, is a realistic option for a potential 7900-type card.

    That 256MB SRAM on each chiplet helps save a lot of bandwidth; some smartish cache management would allow for some texture duplication in both SRAM caches.
     
    Dangerman and BRiT like this.
  19. Gubbi

    Gubbi Veteran

    :D

    As Qesa wrote, it's 7.25 pJ per bit, so a 3090 uses 19.5*10^9 * 384 * 7.25*10^-12 J/s = 54.3 W. Not a deal breaker, but not a trivial amount either.
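    That arithmetic can be checked with a quick script, plugging in the 19.5 Gbps/pin, 384-bit bus and 7.25 pJ/bit figures quoted above (thread figures, not measured values):

```python
# Back-of-envelope GDDR6X interface power for a 3090-class card,
# using the per-pin data rate, bus width and pJ/bit quoted in the thread.
data_rate_per_pin = 19.5e9   # bits/s per pin
bus_width = 384              # bus width in bits (pins)
energy_per_bit = 7.25e-12    # J/bit, Micron's GDDR6X figure

power_watts = data_rate_per_pin * bus_width * energy_per_bit
print(f"{power_watts:.1f} W")  # -> 54.3 W
```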

    Cheers
     
    BRiT likes this.
  20. HLJ

    HLJ Regular

    Any reliable numbers on how many watts the 128 MB cache in the 6800 series uses?
     