AMD: RDNA 3 Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Oct 28, 2020.

  1. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    2 pJ/bit would be very good. Things could happen if that materializes.
     
  2. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,039
    Likes Received:
    286
    Location:
    Luxembourg
    That's literally the figure for the 2017 EPYC... things happened a long time ago.
     
  3. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    I'm still not holding my breath for consumer chiplet GPUs. Datacenter, on the other hand, I think is inevitable. I'm happy to be wrong on this one, though, if consumer chiplet GPUs with no performance downside turn up sooner rather than later.
     
  4. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    2,060
    Likes Received:
    1,493
    Location:
    France

    But they're not talking about that. RDNA 3 is their future gaming arch.

    CDNA X is their compute/science arch.
     
  5. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    It could be the same kind of red herring as GDDR6X turned out to be. Sometimes the internet thinks it knows, and it doesn't know.
     
  6. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,129
    Likes Received:
    510
    But N31 went past the power-on, lol.
     
  7. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    I wouldn't know. Perhaps you can link the material that tells what N31 is, so the rest of us would also know? On the internet, posts like this are easily ignored without solid sources.
    Ah, I guess this is the source:


    An 80CU chiplet in 5nm sounds odd. It should be possible to fit those 160 CUs on a single die. Maybe a test vehicle that isn't necessarily intended to be sold to consumers? An 80CU chiplet in 7nm could make more sense, considering the increasing prices of more advanced processes and that an 80CU chip is already in production in 7nm.
     
    #187 manux, Feb 11, 2021
    Last edited: Feb 11, 2021
    nutball likes this.
  8. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    They are also hitting 300W, so putting two of them in one product sounds kinda unrealistic. And that's where 5nm comes in.
     
    BRiT likes this.
  9. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    Drop the clocks a little and 500W water-cooled is doable. The 3090 cooler could probably handle that on air as well.
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    3090 is 350W card though. Quite a stretch to 500W.
     
    Cuthalu likes this.
  11. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    It goes to pretty insane power draws when overclocked, without the cooler giving out. In its default config it's a very quiet card and the cooler is not sweating.
     
  12. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,129
    Likes Received:
    510
    lol
    LOL.
    Please don't.
     
  13. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    LoL, lol, uh, rofl, double lol. You win, I'll press the ignore button.
     
    Cuthalu, tinokun, Qesa and 2 others like this.
  14. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,277
    Likes Received:
    4,709
    Location:
    Pennsylvania
    Bondrewd is back. Such quality discussion when the devoted butt heads.
     
    Cuthalu and Scott_Arm like this.
  15. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    I guess I was the ant looking at a spectacular light effect that turned out to be the sun and a magnifying glass. Should have known better.
     
  16. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    682
    Likes Received:
    363
    Ok, here we go, again.

    A reminder: bandwidth for desktop GPUs is insanely cheap compared to their compute. The entire GDDR6X bus on a 3090 uses maybe a handful of watts at most: 7.25 picojoules per byte. A picojoule is 10^-12 joules, and 1 joule per second is 1 watt. So even a terabyte per second isn't that much (about 7 watts). Bandwidth is only dear compared to mobile power usage. Compared to desktop/HPC stuff, where compute frequencies hit the exponential growth curve hard while bandwidth costs remain constant, bandwidth power usage is negligible. Remember, when citing math, to actually do the math.
     
  17. Qesa

    Newcomer

    Joined:
    Feb 23, 2020
    Messages:
    26
    Likes Received:
    36
    It's 7.25 pJ per bit according to Micron. You're off by a factor of 8 there.
     
  18. vjPiedPiper

    Newcomer

    Joined:
    Nov 23, 2005
    Messages:
    117
    Likes Received:
    72
    Location:
    Melbourne Aus.
    It's important to remember the difference between what die-to-die (or die-to-chiplet) transfers cost in energy over a proprietary in-socket interconnect versus what an external interface like DDR or HBM costs.
    Also, while GPU workloads probably have higher inter-thread communication, they are also better able to tolerate latency in those workloads, compared to CPUs anyway.

    While I'm no expert in these things, I think that taking the existing 80CU RDNA2 core with 256MB of SRAM, a slight upgrade to the ray-tracing functionality, and a modified memory interface that talks to a host I/O die,
    shrinking it to 5nm, and putting two of them plus an I/O die on a package, is a realistic option for a potential 7900-type card.

    That 256MB of SRAM on each chiplet saves a lot of bandwidth; some smartish cache management would even allow for some texture duplication across the two SRAM caches.
     
    Dangerman and BRiT like this.
  19. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,617
    Likes Received:
    1,047
    :D

    As Qesa wrote, it's 7.25 pJ per bit, so a 3090 uses 19.5*10^9 * 384 * 7.25*10^-12 J/s = 54.3W. Not a deal breaker, but not a trivial amount either.

    Cheers
     
    BRiT likes this.
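    Gubbi's arithmetic can be sanity-checked with a quick script. The inputs are the ones quoted in the thread: 19.5 Gbps per pin and a 384-bit bus (the 3090's stock GDDR6X configuration) and Micron's ~7.25 pJ per bit transferred.

    ```python
    # Sanity check of the GDDR6X interface-power estimate quoted above.
    # Inputs: RTX 3090 runs GDDR6X at 19.5 Gbps per pin on a 384-bit bus;
    # Micron quotes roughly 7.25 pJ of energy per bit transferred.

    GBPS_PER_PIN = 19.5e9        # bits per second, per pin
    BUS_WIDTH_BITS = 384         # bus width in pins
    ENERGY_PER_BIT_J = 7.25e-12  # joules per bit (Micron figure)

    bandwidth_bits_per_s = GBPS_PER_PIN * BUS_WIDTH_BITS
    power_watts = bandwidth_bits_per_s * ENERGY_PER_BIT_J

    print(f"Bandwidth: {bandwidth_bits_per_s / 8 / 1e9:.0f} GB/s")  # 936 GB/s
    print(f"Interface power: {power_watts:.1f} W")                  # 54.3 W
    ```

    The 936 GB/s figure matches the 3090's rated memory bandwidth, which is a good sign the per-pin and bus-width inputs are right; the ~54 W result also shows why mixing up pJ/byte and pJ/bit changes the conclusion by a factor of 8.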
  20. HLJ

    HLJ
    Regular Newcomer

    Joined:
    Aug 26, 2020
    Messages:
    385
    Likes Received:
    633
    Any reliable numbers on how many watts the 128 MB cache in the 6800 series uses?
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.